diff --git a/docs/generics/api-token.md b/docs/generics/api-token.md index 95294ea1..40b9f622 100644 --- a/docs/generics/api-token.md +++ b/docs/generics/api-token.md @@ -7,7 +7,7 @@ path: "/docs/api-token" # API Token -Applications (see [TriplyDB.js](/triplydb-js)) and pipelines (see [TriplyETL](/triply-etl)) often require access rights to interact with TriplyDB instances. Specifically, reading non-public data and writing any (public or non-public) data requires setting an API token. The token ensures that only users that are specifically authorized for certain datasets are able to access and/or modify those datasets. +Applications (see [TriplyDB.js](../../triplydb-js)) and pipelines (see [TriplyETL](../../triply-etl)) often require access rights to interact with TriplyDB instances. Specifically, reading non-public data and writing any (public or non-public) data requires setting an API token. The token ensures that only users that are specifically authorized for certain datasets are able to access and/or modify those datasets. The following steps must be performed in order to create an API token: diff --git a/docs/triply-api/index.md b/docs/triply-api/index.md index afd4c5c3..a68e42d1 100644 --- a/docs/triply-api/index.md +++ b/docs/triply-api/index.md @@ -101,14 +101,14 @@ curl -H 'Authorization: Bearer TOKEN' -H 'Content-Type: application/json' -X POS Upper-case letter words in json after `-d` must be replaced by the following values: - `NAME` :: The name of the dataset in the url. -- `ACCESS_LEVEL` :: *public*, *private* or *internal*. For more information visit [Access levels in TriplyDB](/triply-db-getting-started/reference/#access-levels). +- `ACCESS_LEVEL` :: *public*, *private* or *internal*. For more information visit [Access levels in TriplyDB](../triply-db-getting-started/reference/#access-levels). - `DISPLAY_NAME` :: The display name of the dataset. ### Upload data to a dataset You can upload a data file via the Triply API. You need to [use the API Token](#using-the-api-token) and send an HTTP POST request with data specifying the local file path. -The list of supported file extentions can be checked in [Adding data: File upload](/triply-db-getting-started/uploading-data/#adding-data-file-upload) documentation. +The list of supported file extensions can be checked in [Adding data: File upload](../triply-db-getting-started/uploading-data/#adding-data-file-upload) documentation. The example of the URI: ```sh @@ -190,7 +190,7 @@ The above example includes the following GRLC annotations: ## LD Browser API -Triply APIs provide a convenient way to access data used by [LD Browser](/triply-db-getting-started/viewing-data/#linked-data-browser), which offers a comprehensive overview of a specific IRI. By using Triply API for a specific IRI, you can retrieve the associated 'document' in the `.nt` format that describes the IRI. +Triply APIs provide a convenient way to access data used by [LD Browser](../triply-db-getting-started/viewing-data/#linked-data-browser), which offers a comprehensive overview of a specific IRI. By using Triply API for a specific IRI, you can retrieve the associated 'document' in the `.nt` format that describes the IRI. 
To make an API request for a specific instance, you can use the following URI path: @@ -203,7 +203,7 @@ To illustrate this, let's take the example of the DBpedia dataset and the [speci ```none https://api.triplydb.com/datasets/DBpedia-association/dbpedia/describe.nt?resource=http%3A%2F%2Fdbpedia.org%2Fresource%2FMona_Lisa ``` -in your browser, the `.nt` document describing the 'Mona Lisa' instance will be automatically downloaded. You can then upload this file to a dataset and [visualize it in a graph](/yasgui/#network-triplydb-plugin). Figure 1 illustrates the retrieved graph for the ‘Mona Lisa’ instance. +in your browser, the `.nt` document describing the 'Mona Lisa' instance will be automatically downloaded. You can then upload this file to a dataset and [visualize it in a graph](../yasgui/#network-triplydb-plugin). Figure 1 illustrates the retrieved graph for the ‘Mona Lisa’ instance. ![Figure 1](../assets/MonaLisaGraph.png) @@ -381,7 +381,7 @@ Everybody who has access to the dataset also has access to its services, includi - For *Internal* datasets, only users that are logged into the triple store can issue queries. - For *Private* datasets, only users that are logged into the triple store and are members of `ACCOUNT` can issue queries. -Notice that for professional use it is easier and better to use [saved queries](/triply-db-getting-started/saved-queries/#saved-queries). Saved queries have persistent URIs, descriptive metadata, versioning, and support for reliable large-scale pagination ([see how to use pagination with saved query API](/triply-db-getting-started/saved-queries/#pagination-with-the-saved-query-api)). Still, if you do not have a saved query at your disposal and want to perform a custom SPARQL request against an accessible endpoint, you can do so. TriplyDB implements the SPARQL 1.1 Query Protocol standard for this purpose. +Notice that for professional use it is easier and better to use [saved queries](../triply-db-getting-started/saved-queries/#saved-queries). Saved queries have persistent URIs, descriptive metadata, versioning, and support for reliable large-scale pagination ([see how to use pagination with saved query API](../triply-db-getting-started/saved-queries/#pagination-with-the-saved-query-api)). Still, if you do not have a saved query at your disposal and want to perform a custom SPARQL request against an accessible endpoint, you can do so. TriplyDB implements the SPARQL 1.1 Query Protocol standard for this purpose. ### Sending a SPARQL Query request diff --git a/docs/triply-db-getting-started/index.md b/docs/triply-db-getting-started/index.md index 030cbf2f..2e8e2621 100644 --- a/docs/triply-db-getting-started/index.md +++ b/docs/triply-db-getting-started/index.md @@ -10,4 +10,4 @@ path: "/docs/triply-db-getting-started" TriplyDB allows you to store, publish, and use linked data Knowledge Graphs. TriplyDB makes it easy to upload linked data and expose it through various APIs (SPARQL, Elasticsearch, LDF, REST). [Read -More](/triply-api) +More](../triply-api) diff --git a/docs/triply-db-getting-started/saved-queries/index.md b/docs/triply-db-getting-started/saved-queries/index.md index 9c2a6645..e82cfb6f 100644 --- a/docs/triply-db-getting-started/saved-queries/index.md +++ b/docs/triply-db-getting-started/saved-queries/index.md @@ -12,7 +12,7 @@ removing the hassle of figuring out how to run a SPARQL query. There are two ways to create a saved query. _You need to be logged in and have authorization rights on the dataset to use this feature_ -1. 
When working from the [SPARQL IDE](/triply-db-getting-started/viewing-data#sparql-ide) +1. When working from the [SPARQL IDE](../viewing-data#sparql-ide) 2. Using the Saved Queries tab in a dataset Creating a saved query with the SPARQL IDE is done by writing a query/visualization and hitting the save button ![The save query button highlighted](../../assets/save-query-highlighted.png) @@ -31,7 +31,7 @@ If you want to delete a saved query, you can do so by clicking the three dots on ### Sharing a saved query -To share a saved query, for example in [Data Stories](/triply-db-getting-started/data-stories#data-stories), you can copy the link that is +To share a saved query, for example in [Data Stories](../data-stories#data-stories), you can copy the link that is used when you open the query in TriplyDB. Let's say you have a query called `Timelined-Cars-BETA` in the dataset `core` under the account `dbpedia` and you want to use version 9. Then the following link would be used @@ -50,7 +50,7 @@ https://triplydb.com/DBpedia-association/-/queries/timeline-cars The result of a query can be downloaded via the TriplyDB interface. After saving the query, open it in TriplyDB. e.g. . -You can download results in different data format, depending on which [visualization option](/yasgui/#visualizations) you use. +You can download results in different data format, depending on which [visualization option](../../yasgui/#visualizations) you use. For example, if you want to download the results in a `.json` format, you can choose the option `Response` and click on the download icon or scroll down and click on `Download result`. ![Download the query result via the download icon.](../../assets/queryResult.png) @@ -62,16 +62,16 @@ Below is a table of all supported visualizations and what format of results they | **Visualization option** | **Result data format** | | ------------------------------------------------------------ | ---------------------- | -| [Table](/yasgui/#table) | `.csv` | -| [Response](/yasgui/#response) | `.json` | -| [Gallery](/yasgui/#gallery-triplydb-plugin) | Download not supported | -| [Chart](/yasgui/#chart-triplydb-plugin) | `.svg` | -| [Geo](/yasgui/#geo-triplydb-plugin) | Download not supported | -| [Geo-3D](/yasgui/#geo-3d-triplydb-only) | Download not supported | -| [Geo events](/yasgui/#geo-events-triplydb-plugin)| Download not supported | -| [Markup](/yasgui/#markup-triplydb-plugin) | `.svg`, `.html` | -| [Network](/yasgui/#network-triplydb-plugin) | `.png` | -| [Timeline](/yasgui/#timeline-triplydb-plugin) | Download not supported | +| [Table](../../yasgui/#table) | `.csv` | +| [Response](../../yasgui/#response) | `.json` | +| [Gallery](../../yasgui/#gallery-triplydb-plugin) | Download not supported | +| [Chart](../../yasgui/#chart-triplydb-plugin) | `.svg` | +| [Geo](../../yasgui/#geo-triplydb-plugin) | Download not supported | +| [Geo-3D](../../yasgui/#geo-3d-triplydb-only) | Download not supported | +| [Geo events](../../yasgui/#geo-events-triplydb-plugin)| Download not supported | +| [Markup](../../yasgui/#markup-triplydb-plugin) | `.svg`, `.html` | +| [Network](../../yasgui/#network-triplydb-plugin) | `.png` | +| [Timeline](../../yasgui/#timeline-triplydb-plugin) | Download not supported | @@ -105,7 +105,7 @@ link: #### Pagination with TriplyDB.js -**TriplyDB.js** is the official programming library for interacting with [TriplyDB](/triply-db-getting-started/). TriplyDB.js allows the user to connect to a TriplyDB instance via the TypeScript language. 
TriplyDB.js has the advantage that it can handle pagination internally so it can reliably retrieve a large number of results. +**TriplyDB.js** is the official programming library for interacting with [TriplyDB](../). TriplyDB.js allows the user to connect to a TriplyDB instance via the TypeScript language. TriplyDB.js has the advantage that it can handle pagination internally so it can reliably retrieve a large number of results. To get the output for a `construct` or `select` query, follow these steps: @@ -193,7 +193,7 @@ SPARQL queries as a RESTful API, also means you can transport your data to your Clicking the '' button opens the code snippet screen. Here you select the snippet in the language you want to have, either Python or R. You can then copy the snippet, by clicking the 'copy to clipboard' button or selecting the snippet and pressing `ctrl-c`. Now you can paste the code in the location you want to use the data. The data is stored in the `data` variable in `JSON` format. -When the SPARQL query is not public, but instead either private or internal, you will need to add an authorization header to the get request. Without the authorization header the request will return an incorrect response. Checkout [Creating your API token](/triply-api/#creating-an-api-token) about creating your API-token for the authorization header. +When the SPARQL query is not public, but instead either private or internal, you will need to add an authorization header to the get request. Without the authorization header the request will return an incorrect response. Checkout [Creating your API token](../../triply-api/#creating-an-api-token) about creating your API-token for the authorization header. Check out the [SPARQL pagination page](#download-more-than-10-000-query-results-sparql-pagination) when you want to query a SPARQL query that holds more than 10.000 results. The [SPARQL pagination page ](#download-more-than-10-000-query-results-sparql-pagination) will explain how you can retrieve the complete set. @@ -213,4 +213,4 @@ Users can specify additional metadata inside the query string, by using the GRLC #+ frequency: hourly ``` -See the [Triply API documentation](/triply-api#queries) for how to retrieve query metadata, including how to retrieve GRLC annotations. +See the [Triply API documentation](../../triply-api#queries) for how to retrieve query metadata, including how to retrieve GRLC annotations. diff --git a/docs/triply-db-getting-started/uploading-data/index.md b/docs/triply-db-getting-started/uploading-data/index.md index 452d3ed5..b08712f0 100644 --- a/docs/triply-db-getting-started/uploading-data/index.md +++ b/docs/triply-db-getting-started/uploading-data/index.md @@ -22,18 +22,18 @@ The following steps allow a new linked datasets to be created: 5. Optionally, enter a dataset description. You can use rich text formatting by using Markdown. See [our section about - Markdown](/triply-db-getting-started/reference#markdown-support) for details. + Markdown](../reference#markdown-support) for details. 6. Optionally, change the access level for the dataset. By default this is set to “Private”. See [dataset access - levels](/triply-db-getting-started/reference#access-levels) for more information. + levels](../reference#access-levels) for more information. 
![The “Add dataset” dialog](../../assets/createdataset.png) -When datasets are Public (see [Access Levels](/triply-db-getting-started/reference#access-levels)), they +When datasets are Public (see [Access Levels](../reference#access-levels)), they automatically expose metadata and are automatically crawled and -indexed by popular search engines (see [Metadata](/triply-db-getting-started/publishing-data#entering-metadata)). +indexed by popular search engines (see [Metadata](../publishing-data#entering-metadata)). ## Adding data diff --git a/docs/triply-db-getting-started/viewing-data/index.md b/docs/triply-db-getting-started/viewing-data/index.md index 16487647..91e7de34 100644 --- a/docs/triply-db-getting-started/viewing-data/index.md +++ b/docs/triply-db-getting-started/viewing-data/index.md @@ -132,7 +132,7 @@ Open Source [Yasgui](../yasgui) query editor. It is often useful to save a SPARQL query for later use. This is achieved by clicking on the save icon in the top-right corner of the -SPARQL Editor. Doing so will create a [Save Query](/triply-db-getting-started/saved-queries#saved-queries). +SPARQL Editor. Doing so will create a [Save Query](../saved-queries#saved-queries). ### Sharing a SPARQL query @@ -148,7 +148,7 @@ from which the SPARQL query can be copied in the following three forms: visualizations. Long URLs are not supported by some application that cut off a URL after a maximum length (often 1,024 characters). Use one of the other two options or use [Saved - Queries](/triply-db-getting-started/saved-queries#saved-queries) to avoid such restrictions. + Queries](../saved-queries#saved-queries) to avoid such restrictions. 2. A short URL that redirects to the full URL-encoded SPARQL query. @@ -156,7 +156,7 @@ from which the SPARQL query can be copied in the following three forms: application that supports this command. cURL is often used by programmers to test HTTP(S) requests. -[Saved Queries](/triply-db-getting-started/saved-queries#saved-queries) are a more modern way of sharing SPARQL queries. +[Saved Queries](../saved-queries#saved-queries) are a more modern way of sharing SPARQL queries. They do not have any of the technical limitations that occur with URL-encoded queries. diff --git a/docs/triply-etl/assert/index.md b/docs/triply-etl/assert/index.md index 71a95c94..22ebf486 100644 --- a/docs/triply-etl/assert/index.md +++ b/docs/triply-etl/assert/index.md @@ -27,8 +27,8 @@ Assertion are statements of fact. In linked data, assertions are commonly calle TriplyETL supports the following assertion approaches: - 3A. **RATT** (RDF All The Things) contains a core set of TypeScript functions for making linked data assertions: - - [RATT Term Assertions](/triply-etl/assert/ratt/term): functions that are used to assert terms (IRIs or literals). - - [RATT Statement Assertions](/triply-etl/assert/ratt/statement): functions that are used to assert statements (triples or quads). + - [RATT Term Assertions](ratt/term): functions that are used to assert terms (IRIs or literals). + - [RATT Statement Assertions](ratt/statement): functions that are used to assert statements (triples or quads). @@ -37,6 +37,6 @@ TriplyETL supports the following assertion approaches: After linked data has been asserted into the Internal Store, the following steps can be performend: -- [4. **Enrich**](/triply-etl/enrich/) improves or extends linked data in the Internal Store. -- [5. **Validate**](/triply-etl/validate) ensures that linked data in the Internal Store is correct. -- [6. 
**Publish**](/triply-etl/publish) makes linked data available in a Triple Store for others to use. +- [4. **Enrich**](../enrich/) improves or extends linked data in the Internal Store. +- [5. **Validate**](../validate) ensures that linked data in the Internal Store is correct. +- [6. **Publish**](../publish) makes linked data available in a Triple Store for others to use. diff --git a/docs/triply-etl/assert/ratt/statement.md b/docs/triply-etl/assert/ratt/statement.md index 38e4528e..29422d4f 100644 --- a/docs/triply-etl/assert/ratt/statement.md +++ b/docs/triply-etl/assert/ratt/statement.md @@ -29,15 +29,15 @@ import { nestedPairs, objects, pairs, quad, quads, Creates a nested node and makes multiple assertions about that node. -Since linked data is composed of triples, more complex n-ary information must often be asserted by using a nested node. Nested nodes must be IRIs, so they must be specified with [iri()](/triply-etl/assert/ratt/term#function-iri) or [addIri()](/triply-etl/transform/ratt#function-addiri). +Since linked data is composed of triples, more complex n-ary information must often be asserted by using a nested node. Nested nodes must be IRIs, so they must be specified with [iri()](../term#function-iri) or [addIri()](../../../transform/ratt#function-addiri). Signature: `nestedPairs(subject, predicate, nestedNode, pairs...)` ### Parameters -- `subject` A subject term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `predicate` A predicate term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `nestedNode` The nested node. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). +- `subject` A subject term. This must be an IRI (see function [iri()](../term#function-iri)). +- `predicate` A predicate term. This must be an IRI (see function [iri()](../term#function-iri)). +- `nestedNode` The nested node. This must be an IRI (see function [iri()](../term#function-iri)). - `pairs` One or more pairs that make assertions about the nested node. Every pair consists of a predicate term and an object term (in that order). ### Example: Unit of measure @@ -137,9 +137,9 @@ pairs(iri(prefix.geometry, 'id'), ### See also In some cases, it is inconvenient to come up with a naming scheme for intermediate nodes. In such cases, the following options are available: -- Use transformation [addHashedIri()](/triply-etl/transform/ratt#function-addhashediri) to create a content-based IRI. -- Use transformation [addRandomIri()](/triply-etl/transform/ratt#function-addrandomiri) to create a random IRI. -- Use transformation [addSkolemIri()](/triply-etl/transform/ratt#function-addskolemiri) to create a random Skolem IRI. +- Use transformation [addHashedIri()](../../../transform/ratt#function-addhashediri) to create a content-based IRI. +- Use transformation [addRandomIri()](../../../transform/ratt#function-addrandomiri) to create a random IRI. +- Use transformation [addSkolemIri()](../../../transform/ratt#function-addskolemiri) to create a random Skolem IRI. @@ -151,9 +151,9 @@ This function provides a shorthand notation for assertions that can also be made ### Parameters -- `subject` A subject term. This must be either an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)) or a literal (see function [literal()](/triply-etl/assert/ratt/term#function-literal)). -- `predicate` A predicate term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). 
-- `objects` An array of object terms. This must be either an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)) or a literal (see function [literal](/triply-etl/assert/ratt/term#function-literal)). Every distinct object term in the array results in a distinct triple assertion. +- `subject` A subject term. This must be either an IRI (see function [iri()](../term#function-iri)) or a literal (see function [literal()](../term#function-literal)). +- `predicate` A predicate term. This must be an IRI (see function [iri()](../term#function-iri)). +- `objects` An array of object terms. This must be either an IRI (see function [iri()](../term#function-iri)) or a literal (see function [literal](../term#function-literal)). Every distinct object term in the array results in a distinct triple assertion. ### Example: Alternative labels @@ -266,10 +266,10 @@ A quadruple is a triple with a graph name as its fourth parameter. ### Parameters -- `subject` A subject term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `predicate` A predicate term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `object` An object term. This must be either an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)) or a literal (see function [literal()](/triply-etl/assert/ratt/term#function-literal)). -- `graph` A graph name. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). +- `subject` A subject term. This must be an IRI (see function [iri()](../term#function-iri)). +- `predicate` A predicate term. This must be an IRI (see function [iri()](../term#function-iri)). +- `object` An object term. This must be either an IRI (see function [iri()](../term#function-iri)) or a literal (see function [literal()](../term#function-literal)). +- `graph` A graph name. This must be an IRI (see function [iri()](../term#function-iri)). ### Example: Data and metadata @@ -296,10 +296,10 @@ A quadruple is a triple with a graph name as its fourth parameter. ### Parameters -- `subject` A subject term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `predicate` A predicate term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `object` An object term. This must be either an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)) or a literal (see function [literal()](/triply-etl/assert/ratt/term#function-literal)). -- `graph` A graph name. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). +- `subject` A subject term. This must be an IRI (see function [iri()](../term#function-iri)). +- `predicate` A predicate term. This must be an IRI (see function [iri()](../term#function-iri)). +- `object` An object term. This must be either an IRI (see function [iri()](../term#function-iri)) or a literal (see function [literal()](../term#function-literal)). +- `graph` A graph name. This must be an IRI (see function [iri()](../term#function-iri)). ### Example: Data and metadata @@ -332,9 +332,9 @@ A triple is a sequence of three terms: subject, predicate, and object. A triple ### Parameters -- `subject` A subject term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `predicate` A predicate term. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). -- `object` An object term. 
This must be either an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)) or a literal (see function [literal()](/triply-etl/assert/ratt/term#function-literal)). +- `subject` A subject term. This must be an IRI (see function [iri()](../term#function-iri)). +- `predicate` A predicate term. This must be an IRI (see function [iri()](../term#function-iri)). +- `object` An object term. This must be either an IRI (see function [iri()](../term#function-iri)) or a literal (see function [literal()](../term#function-literal)). ### Example 1 @@ -350,7 +350,7 @@ triple(iri(prefix.person, 'id'), a, foaf.Person), The following triple asserts that something has an age that is derived from the `'age'` key in the record. Notice that: -- the subject term is an IRI that is stored in the `'_person'` key of the record (possibly created with transformation function [addIri()](/triply-etl/transform/ratt#function-addiri)), +- the subject term is an IRI that is stored in the `'_person'` key of the record (possibly created with transformation function [addIri()](../../../transform/ratt#function-addiri)), - the predicate term is an IRI (`foaf.age`) that is imported from the vocabulary module, - and the object term is a typed literal with a datatype IRI that is imported from the vocabulary module. @@ -368,7 +368,7 @@ Asserts multiple triples in the same named graph: ### Parameters -- `graph` A graph name. This must be an IRI (see function [iri()](/triply-etl/assert/ratt/term#function-iri)). +- `graph` A graph name. This must be an IRI (see function [iri()](../term#function-iri)). - `triples` An array of triples. Every triple is represented by an array of 3 terms: subject, predicate, and object. ### When to use @@ -414,7 +414,7 @@ triples(iri(prefix.ex, 'myGraph'), ## Shorthand notation -Since literals with datatype IRI `xsd:string` are very common, statement assertions allow such literals to be specified without using [`literal()`](/triply-etl/assert/ratt/term#function-literal). +Since literals with datatype IRI `xsd:string` are very common, statement assertions allow such literals to be specified without using [`literal()`](../term#function-literal). For example, the following two triple assertions result in the same linked data: diff --git a/docs/triply-etl/assert/ratt/term.md b/docs/triply-etl/assert/ratt/term.md index 1c1a28ef..ea0662db 100644 --- a/docs/triply-etl/assert/ratt/term.md +++ b/docs/triply-etl/assert/ratt/term.md @@ -32,7 +32,7 @@ iri(content: Key|StaticString) ### Parameters -- `prefix` A prefix that is declared with [declarePrefix()](/triply-etl/declare#declarePrefix). +- `prefix` A prefix that is declared with [declarePrefix()](../../../declare#declarePrefix). - `content` Either a key that contains a string value, or a static string. If the `prefix` is used, this content is placed after the prefix (sometimes referred to as the 'local name'). If the `prefix` parameter is not used, the content must specify a full IRI. ### Examples @@ -58,7 +58,7 @@ triple(iri('url'), a, owl.NamedIndividual), ### See also -If the same IRI is used in multiple statements, repeating the same assertion multiple times may impose a maintenance burden. In such cases, it is possible to first add the IRI to the record using transformation function [addIri()](/triply-etl/transform/ratt#function-addiri), and refer to that one IRI in multiple statements. +If the same IRI is used in multiple statements, repeating the same assertion multiple times may impose a maintenance burden. 
In such cases, it is possible to first add the IRI to the record using transformation function [addIri()](../../../transform/ratt#function-addiri), and refer to that one IRI in multiple statements. Use function [iris()](#function-iris) to create multiple IRIs in one step. @@ -75,7 +75,7 @@ iris(content: Key|Array) ### Parameters -- `prefix` A prefix that is declared with [declarePrefix()](/triply-etl/declare#declarePrefix). +- `prefix` A prefix that is declared with [declarePrefix()](../../../declare#declarePrefix). - `content` Either a key that contains a array of string values, or an array of static strings. If the `prefix` is used, this content is placed after the prefix (sometimes referred to a the 'local name'). If the `prefix` parameter is not used, the content must specify the full IRI. ### Example @@ -116,7 +116,7 @@ The elements of such collections must be represented using a single-linked list ### Parameters -- `prefix` A prefix for linked lists that is declared with [declarePrefix()](/triply-etl/declare#declarePrefix). +- `prefix` A prefix for linked lists that is declared with [declarePrefix()](../../../declare#declarePrefix). - `listOrReference` term, literal, key or string or an array of terms/literals/keys/strings ### Example: Fruit basket @@ -260,7 +260,7 @@ id:amsterdam ### See also -If the same literal is used in multiple statements, repeating the same assertion multiple times can impose a maintenance burden. In such cases, it is possible to first add the literal to the record with transformation [addLiteral()](/triply-etl/transform/ratt#function-addliteral), and refer to that one literal in multiple statements. +If the same literal is used in multiple statements, repeating the same assertion multiple times can impose a maintenance burden. In such cases, it is possible to first add the literal to the record with transformation [addLiteral()](../../../transform/ratt#function-addliteral), and refer to that one literal in multiple statements. Use assertion [literals()](#function-literals) to create multiple literals in one step. diff --git a/docs/triply-etl/cli/index.md b/docs/triply-etl/cli/index.md index c0984fd0..865466b7 100644 --- a/docs/triply-etl/cli/index.md +++ b/docs/triply-etl/cli/index.md @@ -51,7 +51,7 @@ Notice that this prevents you from using the terminal application for new comman The TriplyETL Runner allows you to run a local TriplyETL project in your terminal application. -We assume that you have a local TriplyETL project in which you can successfully run the `npx etl` command. Follow the [Getting Started instructions for TriplyETL Runner](/triply-etl/getting-started#triplyetl-runner) if this is not yet the case. +We assume that you have a local TriplyETL project in which you can successfully run the `npx etl` command. Follow the [Getting Started instructions for TriplyETL Runner](../getting-started#triplyetl-runner) if this is not yet the case. Run the following command to run the ETL pipeline: @@ -86,12 +86,19 @@ TriplyETL Runner will start processing data. Depending on the size of the data s ``` This summary includes the following information: + - **"#Error"** shows the number of errors encountered. With default settings, this number is at most 1, since the Runner will immediately stop after an error occurs. -- **"#Warning"** shows the number of warnings encountered. With default settings, this includes warnings emitted by the [SHACL Validator](/triply-etl/validate/shacl). -- **"#Info"** shows the number of informational messages. 
With default settings, this includes informational messages emitted by the [SHACL Validator](/triply-etl/validate/shacl). + +- **"#Warning"** shows the number of warnings encountered. With default settings, this includes warnings emitted by the [SHACL Validator](../validate/shacl). + +- **"#Info"** shows the number of informational messages. With default settings, this includes informational messages emitted by the [SHACL Validator](../validate/shacl). + - **"#Statements"** shows the number of triples or quads that was generated. This number is equal to or higher than the number of statements that is uploaded to the triple store. The reason for this is that TriplyETL processes records in parallel. If the same statement is generated for two records, the number of statements with be incremented by 2, but only 1 unique statement will be uploaded to the triple store. + - **"#Records"** shows the number of records that was processed. + - **"Started at"** shows the date and time at which the Runner started. + - **"Runtime"** shows the wall time duration of the run. ### Limit the number of records @@ -174,7 +181,7 @@ This fixes the reset issue, but also makes the output less colorful. TriplyETL Tools is a collection of small tools that can be used to run isolated tasks from your terminal application. TriplyETL Tools can be used when you are inside a TriplyETL project. -If you do not have an ETL project yet, use the [TriplyETL Generator](/triply-etl/getting-started#triplyetl-generator) first to create one. +If you do not have an ETL project yet, use the [TriplyETL Generator](../getting-started#triplyetl-generator) first to create one. The following command prints an overview of the supported tools: @@ -235,6 +242,7 @@ npx tools create-token ``` The command will ask a couple of questions in order to create the TriplyDB API Token: + - The hostname of the TriplyDB instance - The name of the token - Your TriplyDB account e-mail @@ -286,4 +294,4 @@ The command can be used as follows: $ npx tools validate -d data.trig -s model.trig ``` -See [this section](/triply-etl/validate/shacl#validation-report) to learn more about the SHACL validation report. +See [this section](../validate/shacl#validation-report) to learn more about the SHACL validation report. diff --git a/docs/triply-etl/control/index.md b/docs/triply-etl/control/index.md index e95ac928..a6e0c566 100644 --- a/docs/triply-etl/control/index.md +++ b/docs/triply-etl/control/index.md @@ -125,7 +125,7 @@ country:nl rdfs:label 'The Netherlands'. country:de rdfs:label 'Germany'. ``` -Notice that `forEach()` only works for lists whose elements are objects*. See [Iterating over lists of primitives](/triply-etl/tmp/tmp.md#list-primitive) for dealing with lists that do not contain objects. +Notice that `forEach()` only works for lists whose elements are objects*. See [Iterating over lists of primitives](../tmp/tmp/#iterating-over-lists-of-primitives) for dealing with lists that do not contain objects. The elements that `forEach()` iterates over are themselves (sub)records. This implies that all functions that work for full records also work for the (sub)records inside `forEach()`. The (sub)records inside an `forEach()` function are smaller. This allows the regular keys of the iterated-over elements to be accessed directly. @@ -138,7 +138,7 @@ In addition to these regular keys, (sub)records inside `forEach()` also contain ### Index key (`$index`) -Each (sub)record that is made available in `forEach()` contains the `$index` key. 
The value of this key is the index of the element in the list. This is the same index that is used to access specific elements in an list, as explained in [the section on accessing lists by index](/triply-etl/extract/formats#accessing-lists-by-index). +Each (sub)record that is made available in `forEach()` contains the `$index` key. The value of this key is the index of the element in the list. This is the same index that is used to access specific elements in an list, as explained in [the section on accessing lists by index](../extract/formats#accessing-lists-by-index). The index key is often useful for assigning a unique subject IRI to every element. diff --git a/docs/triply-etl/debug/index.md b/docs/triply-etl/debug/index.md index 94806ba4..14251787 100644 --- a/docs/triply-etl/debug/index.md +++ b/docs/triply-etl/debug/index.md @@ -104,7 +104,7 @@ limit 10 ## Function `logRecord()` -This function prints the current state of the record to standard output. The record is a generic representation of the data that is extracted from one of the data sources (see the [Record documentation page](/triply-etl/extract/record) for more information). +This function prints the current state of the record to standard output. The record is a generic representation of the data that is extracted from one of the data sources (see the [Record documentation page](../extract/record) for more information). The following snippet prints the inline JSON record to standard output: @@ -136,7 +136,7 @@ Since this prints a full overview of what is available in the data source, this ### Observe the effects of transformations -Another common use case for `logRecord()` is to observe the record at different moments in time. This is specifically useful to observe the effects of [transformation functions](/triply-etl/transform), since these are the functions that modify the record. +Another common use case for `logRecord()` is to observe the record at different moments in time. This is specifically useful to observe the effects of [transformation functions](../transform), since these are the functions that modify the record. The following snippet logs the record directly before and directly after the transformation function `split()` is called. diff --git a/docs/triply-etl/declare/index.md b/docs/triply-etl/declare/index.md index badb8df4..6eb3aa4c 100644 --- a/docs/triply-etl/declare/index.md +++ b/docs/triply-etl/declare/index.md @@ -80,7 +80,7 @@ iri(prefix.vehicle, 'id') iri(prefix.vehicle, str('123')), ``` -See assertion functions [iri()](/triply-etl/assert/ratt/term/#function-iri) and [str()](/triply-etl/assert/ratt/term#function-str) for more information. +See assertion functions [iri()](../assert/ratt/term/#function-iri) and [str()](../assert/ratt/term#function-str) for more information. @@ -309,7 +309,7 @@ triples(graph.metadata, ), ``` -See assertion function [triples()](/triply-etl/assert/ratt/statement#function-triples) for more information. +See assertion function [triples()](../assert/ratt/statement#function-triples) for more information. @@ -337,7 +337,7 @@ Or they can be used to directly assert language-tagged strings in the Internal S triple('_city', rdfs.label, literal('label', lang.fr)), ``` -See transformation function [addLiteral()](/triply-etl/transform/ratt#function-addliteral) and assertion function [literal()](/triply-etl/assert/ratt/term#function-literal) for more information. 
+See transformation function [addLiteral()](../transform/ratt#function-addliteral) and assertion function [literal()](../assert/ratt/term#function-literal) for more information. @@ -350,7 +350,7 @@ import { geojsonToWkt } from '@triplyetl/etl/ratt' import { epsg } from '@triplyetl/etl/vocab' ``` -Such IRIs that denote coordinate reference systems can be used in several geospatial functions, for example in transformation function [geojsonToWkt()](/triply-etl/transform/ratt#function-geojsontowkt): +Such IRIs that denote coordinate reference systems can be used in several geospatial functions, for example in transformation function [geojsonToWkt()](../transform/ratt#function-geojsontowkt): ```ts geojsonToWkt({ diff --git a/docs/triply-etl/enrich/index.md b/docs/triply-etl/enrich/index.md index b30ae1e6..9ff14e18 100644 --- a/docs/triply-etl/enrich/index.md +++ b/docs/triply-etl/enrich/index.md @@ -24,12 +24,12 @@ graph LR TriplyETL supports the following enrichment approaches: -- 4A. [**SHACL Rules**](/triply-etl/enrich/shacl) are able to apply SPARQL Ask and Construct queries to the internal store. -- 4B. [**SPARQL Update**](/triply-etl/enrich/sparql) allows linked data to be added to and deleted from the internal store. +- 4A. [**SHACL Rules**](shacl) are able to apply SPARQL Ask and Construct queries to the internal store. +- 4B. [**SPARQL Update**](sparql) allows linked data to be added to and deleted from the internal store. ## See also If you do not have linked data in your internal store yet, then first perform one of the following steps: -- **1. Extract** allows you to load linked data into your internal store directly, using the [loadRdf()](/triply-etl/extract/formats#function-loadrdf) function. -- [3. **Assert**](/triply-etl/assert) uses entries from your record to make linked data assertions into your internal store. +- **1. Extract** allows you to load linked data into your internal store directly, using the [loadRdf()](../extract/formats#function-loadrdf) function. +- [3. **Assert**](../assert) uses entries from your record to make linked data assertions into your internal store. diff --git a/docs/triply-etl/enrich/shacl/index.md b/docs/triply-etl/enrich/shacl/index.md index e7028c5b..294d63cd 100644 --- a/docs/triply-etl/enrich/shacl/index.md +++ b/docs/triply-etl/enrich/shacl/index.md @@ -8,7 +8,7 @@ SHACL Rules are applied to the linked data that is currently present in the inte The SHACL Rules engine in TriplyETL processes rules iteratively. This supports rules whose execution depends on the outcome of other rules. Triply observes that this iterative functionality is often necessary for complex production systems in which SHACL Rules are applied. -See the [enrichment step overview page](/triply-etl/enrich) for other enrichment approaches. +See the [enrichment step overview page](../) for other enrichment approaches. @@ -17,7 +17,7 @@ See the [enrichment step overview page](/triply-etl/enrich) for other enrichment SHACL Rules can be used when the following preconditions are met: 1. You have a data model that has one or more SHACL Rules. -2. You have some linked data in the internal store. If your internal store is still empty, you can read [the Assert documentation](/triply-etl/assert) on how to add linked data to the internal store. +2. You have some linked data in the internal store. If your internal store is still empty, you can read [the Assert documentation](../../assert) on how to add linked data to the internal store. 
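If it helps to see precondition 2 in context, the following is a minimal sketch (the file name and the exact import layout are assumptions, not taken from the original page) that loads existing RDF into the internal store before any SHACL Rules are applied:

```ts
import { Etl, Source, loadRdf } from '@triplyetl/etl/generic'

export default async function (): Promise<Etl> {
  const etl = new Etl()
  etl.use(
    // Load instance data into the internal store, so that the SHACL Rules
    // that run later have something to operate on (precondition 2).
    loadRdf(Source.file('./static/example.trig')),
    // ... the SHACL Rules step would follow here ...
  )
  return etl
}
```
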
The function for executing SHACL Rules is imported as follows: @@ -224,7 +224,7 @@ In this example, we start out with a data source that is not linked data: { "age": 12, "name": "peter" }, ``` -We use the [fromJson()](/triply-etl/extract/formats#extractor-fromjson) extractor and specify the source data [inline](/triply-etl/extract/types#inline-json). We use RATT assertion function [pairs()](/triply-etl/assert/ratt/statement#function-pairs) to add linked data to the internal store. +We use the [fromJson()](../../extract/formats#extractor-fromjson) extractor and specify the source data [inline](../../extract/types#inline-json). We use RATT assertion function [pairs()](../../assert/ratt/statement#function-pairs) to add linked data to the internal store. ```ts import { logQuads } from '@triplyetl/etl/debug' diff --git a/docs/triply-etl/enrich/sparql/index.md b/docs/triply-etl/enrich/sparql/index.md index 472a1a59..31585b36 100644 --- a/docs/triply-etl/enrich/sparql/index.md +++ b/docs/triply-etl/enrich/sparql/index.md @@ -4,7 +4,7 @@ SPARQL Update is a powerful feature that allows you to modify and enrich linked data in the internal store. With SPARQL Update, you can generate new linked data based on existing linked data, thereby enhancing the content of the store. -*Support for SPARQL Update is currently experimental. In the meantime, you can use [SHACL Rules](/triply-etl/enrich/shacl) to implement the Enrich Step of your pipeline.* +*Support for SPARQL Update is currently experimental. In the meantime, you can use [SHACL Rules](../shacl) to implement the Enrich Step of your pipeline.* @@ -39,7 +39,7 @@ insert data { . }`), } ``` -Debug function [logQuads()](/triply-etl/debug#function-logquads) prints the content of the internal store to standard output: +Debug function [logQuads()](../../debug#function-logquads) prints the content of the internal store to standard output: ```turtle base @@ -104,7 +104,7 @@ prefix sdo: <${prefix.sdo('').value}> delete data { sdo:children . }`), ``` -You can use the debug function [logQuads()](/triply-etl/debug#function-logquads) before and after this function call, to see the effects on the internal store. +You can use the debug function [logQuads()](../../debug#function-logquads) before and after this function call, to see the effects on the internal store. diff --git a/docs/triply-etl/extract/formats/index.md b/docs/triply-etl/extract/formats/index.md index 0ba1b25a..ad3e5a75 100644 --- a/docs/triply-etl/extract/formats/index.md +++ b/docs/triply-etl/extract/formats/index.md @@ -46,7 +46,7 @@ The following code snippet extracts records from an online CSV file, that is hos fromCsv(Source.url('https://somewhere.com/data.csv')), ``` -The following code snippet extracts records from a [TriplyDB Asset](/triply-etl/extract/types#triplydb-assets). The asset is store in the data with name `'some-data'`, under an account with name `'some-account'`. The name of the asset is `'example.csv'`: +The following code snippet extracts records from a [TriplyDB Asset](../types#triplydb-assets). The asset is store in the data with name `'some-data'`, under an account with name `'some-account'`. The name of the asset is `'example.csv'`: ```ts fromCsv( @@ -138,7 +138,7 @@ which is emitted as the following two TriplyETL records: ``` Notice that: -- All values have type `string`, including `"ID"` and `"Age"`. The value for field `"Age"` should probably be considered numeric, but the CSV format cannot express this. 
A TriplyETL [transformation](/triply-etl/transform) can be used to cast string values to numeric values. +- All values have type `string`, including `"ID"` and `"Age"`. The value for field `"Age"` should probably be considered numeric, but the CSV format cannot express this. A TriplyETL [transformation](../../transform) can be used to cast string values to numeric values. - The trailing space in `"D., Jane "` is omitted from the second record, since training whitespace is removed from all keys and values. - The `"Age"` key is missing from the second record, since the corresponding CSV cell contains the empty string, which is considered to denote an empty value. @@ -148,7 +148,7 @@ Notice that: JSON (JavaScript Object Notation) is a popular open standard for interchanging tree-shaped data. TriplyETL has a dedicated `fromJson()` extractor for this format. -The following code snippet connects to a JSON source that is stored as a [TriplyDB asset](/triply-etl/extract/types#triplydb-assets): +The following code snippet connects to a JSON source that is stored as a [TriplyDB asset](../types#triplydb-assets): ```ts fromJson( @@ -339,7 +339,7 @@ logRecord(), TSV or Tab-Separated Values (file name extension `.tsv`) is a popular format for tabular source data. TriplyETL has a `fromTsv()` extractor to support this format. -The following code snippet extracts records for TSV file that is stored as a [TriplyDB Asset](/triply-etl/extract/types#triplydb-assets): +The following code snippet extracts records for TSV file that is stored as a [TriplyDB Asset](../types#triplydb-assets): ```ts fromTsv( @@ -388,7 +388,8 @@ which is emitted as the following two TriplyETL records: ``` Notice that: -- All values have type `string`, including `"ID"` and `"Age"`. The value for field `"Age"` should probably be considered numeric, but the TSV format cannot express this. A TriplyETL [transformation](/triply-etl/transform) can be used to cast string values to numeric values. + +- All values have type `string`, including `"ID"` and `"Age"`. The value for field `"Age"` should probably be considered numeric, but the TSV format cannot express this. A TriplyETL [transformation](../../transform) can be used to cast string values to numeric values. - The trailing space in `"D., Jane "` is omitted from the second record, since training whitespace is removed from all keys and values. - The `"Age"` key is missing from the second record, since the corresponding TSV cell contains the empty string, which is considered to denote an empty value. @@ -398,7 +399,7 @@ Notice that: XLSX or Office Open XML Workbook (file name extension `.xlsx`) is a popular format for storing tabular source data. This is the standard file format for Microsoft Excel. TriplyETL has a dedicated `fromXlsx()` extractor for such sources. -The following code snippet shows how a [TriplyDB assets](/triply-etl/extract/types#triplydb-assets) is used to process records from an XLSX source: +The following code snippet shows how a [TriplyDB assets](../types#triplydb-assets) is used to process records from an XLSX source: ```ts fromXlsx( @@ -467,7 +468,7 @@ Notice the following: For every record emitted by the `fromXlsx()` extractor. the `$sheetName` special key contains the name of the Excel sheet from which that record originates. The presence of the sheet name allows the TriplyETL configuration to be adjusted for different sheet. -For example, an Excel spreadsheet may contain a 'companies' sheet and a 'persons' sheet. 
The name of the sheet may be used to determine which class should be asserted. The following snippet uses transformation [translateAll()](/triply-etl/transform/ratt#function-translateall) to map sheet names to class IRIs: +For example, an Excel spreadsheet may contain a 'companies' sheet and a 'persons' sheet. The name of the sheet may be used to determine which class should be asserted. The following snippet uses transformation [translateAll()](../../transform/ratt#function-translateall) to map sheet names to class IRIs: ```ts fromXlsx(Source.file('example.xlsx')), @@ -537,7 +538,7 @@ The following snippet shows how to load a local shapefile: ```ts fromShapeFile(Source.file("example.shp")) ``` -If the shapefile is stored as the asset for the dataaset in the TriplyDB instance, you can load it from there: +If the shapefile is stored as the asset for the dataset in the TriplyDB instance, you can load it from there: ```ts fromShapeFile( Source.TriplyDb.asset( @@ -590,7 +591,7 @@ For example, the first record in file `nl_1km.shp` (downloaded from [European En } ``` -You can now work with shapefiles and use [geojsonToWkt()](/triply-etl/transform/ratt/#function-geojsontowkt) transfrom function to create geomerty. +You can now work with shapefiles and use [geojsonToWkt()](../../transform/ratt/#function-geojsontowkt) transfrom function to create geomerty. @@ -598,7 +599,7 @@ You can now work with shapefiles and use [geojsonToWkt()](/triply-etl/transform/ XML or Extensible Markup Language is a popular open format for tree-shaped source data. -The following snippets connects to an XML file that is made available as a [TriplyDB asset](/triply-etl/extract/types#triplydb-assets): +The following snippets connects to an XML file that is made available as a [TriplyDB asset](../types#triplydb-assets): ```ts fromXml( diff --git a/docs/triply-etl/extract/index.md b/docs/triply-etl/extract/index.md index 7df723e4..0f52115d 100644 --- a/docs/triply-etl/extract/index.md +++ b/docs/triply-etl/extract/index.md @@ -24,13 +24,13 @@ graph LR The following pages cover the Extract step in detail: -- 1A. [**Data Formats**](/triply-etl/extract/formats) gives an overview of the data formats that are supported by TriplyETL. -- 1B. [**Source Types**](/triply-etl/extract/types) given an overview of the source types that are supported by TriplyETL -- 1C. [**Record**](/triply-etl/extract/record) documents the basic structure of every record in TriplyETL. +- 1A. [**Data Formats**](formats) gives an overview of the data formats that are supported by TriplyETL. +- 1B. [**Source Types**](types) given an overview of the source types that are supported by TriplyETL +- 1C. [**Record**](record) documents the basic structure of every record in TriplyETL. ## Next steps The Extract step results in a stream of records. The basic structure of every Record in TriplyETL is the same. It does not matter which data format or which source type is used. Once a stream of Records is generated, the following steps document how data from those records can be used: -- [2. **Transform**](/triply-etl/transform) are applied to the Record to change its contents. -- [3. **Assert**](/triply-etl/assert) use data from the Record to generate linked data in the Internal Store. +- [2. **Transform**](../transform) are applied to the Record to change its contents. +- [3. **Assert**](../assert) use data from the Record to generate linked data in the Internal Store. 
diff --git a/docs/triply-etl/extract/record/index.md b/docs/triply-etl/extract/record/index.md index 80bb284d..0a134638 100644 --- a/docs/triply-etl/extract/record/index.md +++ b/docs/triply-etl/extract/record/index.md @@ -8,7 +8,7 @@ When a TriplyETL is connected to one of more data sources, a stream of **Records ## The generic Record -We illustrate the representation of the generic Record with the following code snippet. This snippet uses extractor [fromJson()](/triply-etl/extract/formats#extractor-fromjson) to extract data from [inline JSON](/triply-etl/extract/types#inline-json) source data: +We illustrate the representation of the generic Record with the following code snippet. This snippet uses extractor [fromJson()](../formats#extractor-fromjson) to extract data from [inline JSON](../types#inline-json) source data: ```ts import { Etl, fromJson, logRecord } from '@triplyetl/etl/generic' @@ -25,7 +25,7 @@ export default async function (): Promise { } ``` -Debug function [logRecord()](/triply-etl/debug#function-logrecord) prints the current record to standard output. When this pipeline is run, the two inline records are printed as follows: +Debug function [logRecord()](../../debug#function-logrecord) prints the current record to standard output. When this pipeline is run, the two inline records are printed as follows: ```json { @@ -60,7 +60,7 @@ Now suppose that we change the source system. We no longer use inline JSON, but ``` -Let us change the TriplyETL script to use extractor [fromXml()](/triply-etl/extract/formats#extractor-fromxml) and the [local file](/triply-etl/extract/types#local-files) source type: +Let us change the TriplyETL script to use extractor [fromXml()](../formats#extractor-fromxml) and the [local file](../types#local-files) source type: ```ts import { Etl, fromXml, logRecord } from '@triplyetl/etl/generic' @@ -103,7 +103,7 @@ The `loadRecords()` function allows us to run a sub ETL and store its records to The function expects two arguments and can be run with the following snippet: - - `fromSrc` - The Source to load the data from. The list of available extractors can be seen in [Data Formats overview page](/triply-etl/extract/formats/#overview). + - `fromSrc` - The Source to load the data from. The list of available extractors can be seen in [Data Formats overview page](../formats/#overview). - `key` - A new key where the records are stored. ```ts @@ -225,19 +225,19 @@ When you are debugging the configuration of a TriplyETL pipeline, it is sometime whenEqual('$recordId', 908, logRecord()), ``` -Do note that it is generally better to run the TriplyETL for a specific record using the `--from-record-id 908 --head 1` command line flags (see [CLI](/triply-etl/cli)). +Do note that it is generally better to run the TriplyETL for a specific record using the `--from-record-id 908 --head 1` command line flags (see [CLI](../../cli)). ## Special key `$environment` -The TriplyETL record contains special key `$environment`. Its value denotes the DTAP environment that the pipeline is currently running in. This is one of the followin values: "Development", "Test", "Acceptance", or "Production". +The TriplyETL record contains special key `$environment`. Its value denotes the DTAP environment that the pipeline is currently running in. This is one of the following values: "Development", "Test", "Acceptance", or "Production". ## Special key `$sheetName` -The special key `$sheetName` only occurs in records that original from data source that use the Microsoft Excel format. 
In such records, this special key contains the name of the sheet from which the record originats. +The special key `$sheetName` only occurs in records that original from data source that use the Microsoft Excel format. In such records, this special key contains the name of the sheet from which the record originates. -See [the documentation for the Microsoft Excel format](/triply-etl/extract/formats/#extractor-fromxlsx) for more information about this special key. +See [the documentation for the Microsoft Excel format](../formats/#extractor-fromxlsx) for more information about this special key. diff --git a/docs/triply-etl/extract/types/index.md b/docs/triply-etl/extract/types/index.md index ae3e1c78..8d095a9f 100644 --- a/docs/triply-etl/extract/types/index.md +++ b/docs/triply-etl/extract/types/index.md @@ -20,7 +20,7 @@ This page documents the different data source types that can be used in TriplyET ## Local files -The following code snippet extracts records from a local file that uses the [JSON format](/triply-etl/extract/formats#extractor-fromjson): +The following code snippet extracts records from a local file that uses the [JSON format](../formats#extractor-fromjson): ```ts fromJson(Source.file('./static/example.json')), @@ -278,15 +278,15 @@ Saved SPARQL queries in TriplyDB can be used as data sources. SPARQL queries are | Query form | Source extractor | | --- | --- | -| [SPARQL Ask](#sparql-ask-queries) | [fromJson()](/triply-etl/extract/formats#extractor-fromjson), [fromXml()](/triply-etl/extract/formats#extractor-fromxml) | -| [SPARQL Construct](#sparql-construct-and-describe-queries) | [loadRdf()](/triply-etl/extract/formats#function-loadrdf) | -| [SPARQL Describe](#sparql-construct-and-describe-queries) | [loadRdf()](/triply-etl/extract/formats#function-loadrdf) | -| [SPARQL Select](#sparql-select-queries) | [fromCsv()](/triply-etl/extract/formats#extractor-fromcsv), [fromJson()](/triply-etl/extract/formats#extractor-fromjson), [fromTsv()](/triply-etl/extract/formats#extractor-fromtsv), [fromXml()](/triply-etl/extract/formats#extractor-fromxml) | +| [SPARQL Ask](#sparql-ask-queries) | [fromJson()](../formats#extractor-fromjson), [fromXml()](../formats#extractor-fromxml) | +| [SPARQL Construct](#sparql-construct-and-describe-queries) | [loadRdf()](../formats#function-loadrdf) | +| [SPARQL Describe](#sparql-construct-and-describe-queries) | [loadRdf()](../formats#function-loadrdf) | +| [SPARQL Select](#sparql-select-queries) | [fromCsv()](../formats#extractor-fromcsv), [fromJson()](../formats#extractor-fromjson), [fromTsv()](../formats#extractor-fromtsv), [fromXml()](../formats#extractor-fromxml) | ### SPARQL Ask queries -SPARQL Ask queries can return data in either the JSON or the XML format. This allows them to be processed with the extractors [fromCsv()](/triply-etl/extract/formats#extractor-fromcsv) and [fromXml()](/triply-etl/extract/formats#extractor-fromxml). +SPARQL Ask queries can return data in either the JSON or the XML format. This allows them to be processed with the extractors [fromCsv()](../formats#extractor-fromcsv) and [fromXml()](../formats#extractor-fromxml). The following code snippet connects to the XML results of a SPARQL Ask query in TriplyDB: @@ -297,7 +297,7 @@ fromXml(Source.TriplyDb.query('my-account', 'my-ask-query')), ### SPARQL Construct and Describe queries -SPARQL Construct and Describe queries return data in the RDF format. This allows them to be used with function [loadRdf()](/triply-etl/extract/formats#function-loadrdf). 
The following snippet loads the results of a SPARQL query into the internal RDF store of TriplyETL: +SPARQL Construct and Describe queries return data in the RDF format. This allows them to be used with function [loadRdf()](../formats#function-loadrdf). The following snippet loads the results of a SPARQL query into the internal RDF store of TriplyETL: ```ts loadRdf(Source.TriplyDb.query('my-account', 'my-construct-query')), @@ -306,7 +306,7 @@ loadRdf(Source.TriplyDb.query('my-account', 'my-construct-query')), ### SPARQL Select queries -SPARQL Select queries return data in either the CSV, JSON, TSV, or XML format. This allows them to be used with the following four extractors: [fromCsv()](/triply-etl/extract/formats#extractor-fromcsv), [fromJson()](/triply-etl/extract/formats#extractor-fromjson), [fromTsv()](/triply-etl/extract/formats#extractor-fromtsv), and [fromXml()](/triply-etl/extract/formats#extractor-fromxml). +SPARQL Select queries return data in either the CSV, JSON, TSV, or XML format. This allows them to be used with the following four extractors: [fromCsv()](../formats#extractor-fromcsv), [fromJson()](../formats#extractor-fromjson), [fromTsv()](../formats#extractor-fromtsv), and [fromXml()](../formats#extractor-fromxml). The following code snippet connects to the table returned by a SPARQL Select query in TriplyDB: @@ -404,7 +404,7 @@ loadRdf( ) ``` -This snippet assumes that the graph names have been declared (see [Delcarations](/triply-etl/declare#graph-name-declarations)). +This snippet assumes that the graph names have been declared (see [Delcarations](../../declare#graph-name-declarations)). ### TriplyDB instance @@ -456,7 +456,7 @@ Notice that we must specify the RDF serialization format that we use. This is ne | XHTML | `'application/xhtml+xml'` | | XML | `'application/xml'` | -The following example makes RDF source data available to the SHACL [validate()](/triply-etl/validate/shacl) function: +The following example makes RDF source data available to the SHACL [validate()](../../validate/shacl) function: ```ts import { Source } from '@triplyetl/etl/generic' diff --git a/docs/triply-etl/getting-started/index.md b/docs/triply-etl/getting-started/index.md index ab235b67..b3123b83 100644 --- a/docs/triply-etl/getting-started/index.md +++ b/docs/triply-etl/getting-started/index.md @@ -46,7 +46,7 @@ git config --global user.name "Ada Lovelace"
Find a terminal application
-

You must use a terminal application in order to run commands from the TriplyETL CLI. Here are some examples of terminal applications on different operating systems:

+

You must use a terminal application in order to run commands from the TriplyETL CLI. Here are some examples of terminal applications on different operating systems:

On Windows
Most Windows versions come with some version of PowerShell preinstalled. You can also follow these instructions by Microsoft to update to the latest version of PowerShell.
@@ -64,7 +64,7 @@ git config --global user.name "Ada Lovelace" The TriplyETL Generator allows you to create new ETL projects in your terminal application. -If a TriplyETL project already exists, use the [TriplyETL Runner](/triply-etl/cli/#triplyetl-runner) instead. +If a TriplyETL project already exists, use the [TriplyETL Runner](../cli/#triplyetl-runner) instead. In order to use TriplyETL Generator, you must have: @@ -122,7 +122,7 @@ npx triply-etl-generator cd my-etl ``` -4. You can now use the [TriplyETL Runner](/triply-etl/cli#triplyetl-runner) to run the ETL: +4. You can now use the [TriplyETL Runner](../cli#triplyetl-runner) to run the ETL: ```sh npx etl @@ -178,7 +178,7 @@ npx etl At this point, you should see a first TriplyETL process in your terminal application. If this is not the case, please contact [support@triply.cc](mailto:support@triply.cc) to help you out. -Visit the [TriplyETL CLI documentation](/triply-etl/cli#triplyetl-runner) to learn more about how you can use the TriplyETL Runner. Visit the [TriplyETL CI/CD documentation](/triply-etl/maintenance#configure-cicd) to learn more about how you can automate TriplyETL runs. +Visit the [TriplyETL CLI documentation](../cli#triplyetl-runner) to learn more about how you can use the TriplyETL Runner. Visit the [TriplyETL CI/CD documentation](../maintenance#configure-cicd) to learn more about how you can automate TriplyETL runs. diff --git a/docs/triply-etl/maintenance/index.md b/docs/triply-etl/maintenance/index.md index 5c8f2c52..44bcd66f 100644 --- a/docs/triply-etl/maintenance/index.md +++ b/docs/triply-etl/maintenance/index.md @@ -56,9 +56,9 @@ TriplyETL uses the Semantic Versioning approach: `{major}.{minor}.{patch}` The i
Patch update
Only the {patch} number has increased. This means that one or more bugs have been fixed in a backward compatible manner. You should always be able to perform a patch update without having to make any changes to your configuration.
Minor update
-
The {minor} number has increased, but the {major} number is still the same. This means that new functionality was added in a backward compatible manner. You should always be able to perform a minor update without having to make any changes to your configuration. But you may want to check the changelog to see which new functionalities were added.
+
The {minor} number has increased, but the {major} number is still the same. This means that new functionality was added in a backward compatible manner. You should always be able to perform a minor update without having to make any changes to your configuration. But you may want to check the changelog to see which new functionalities were added.
Major update
-
The {major} number has increased. This means that there are incompatible changes. This means that features may have been removed, or existing features may have changed. In such cases, changes to your configuration are almost certainly necessary, and may take some time to implement. Any changes you need to make are described in the changelog.
+
The {major} number has increased. This means that there are incompatible changes: features may have been removed, or existing features may have changed. In such cases, changes to your configuration are almost certainly necessary, and may take some time to implement. Any changes you need to make are described in the changelog.
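The decision tree above can be made concrete with a small, hypothetical TypeScript helper (not part of TriplyETL) that classifies the difference between an installed version and a newly released version:

```ts
// Hypothetical helper, not part of TriplyETL: classify an update as
// 'major', 'minor', or 'patch' by comparing two '{major}.{minor}.{patch}' strings.
function classifyUpdate(installed: string, latest: string): 'major' | 'minor' | 'patch' | 'none' {
  const [curMajor, curMinor, curPatch] = installed.split('.').map(Number)
  const [newMajor, newMinor, newPatch] = latest.split('.').map(Number)
  if (newMajor !== curMajor) return 'major' // incompatible changes: consult the changelog
  if (newMinor !== curMinor) return 'minor' // new, backward compatible functionality
  if (newPatch !== curPatch) return 'patch' // backward compatible bug fixes
  return 'none'
}

console.log(classifyUpdate('3.0.19', '4.0.0')) // 'major'
```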
### Perform the update @@ -101,7 +101,7 @@ TriplyETL pipelines can be configured to run automatically in any CI/CD environm ### CI/CD configuration file -The [TriplyETL Generator](/triply-etl/getting-started#triplyetl-generator) creates a basic configuration file for running TriplyETL in GitLab CI/CD. The configuration file is called `.gitlab-ci.yml`. +The [TriplyETL Generator](../getting-started#triplyetl-generator) creates a basic configuration file for running TriplyETL in GitLab CI/CD. The configuration file is called `.gitlab-ci.yml`. The configuration contains a list of stages: diff --git a/docs/triply-etl/publish/index.md b/docs/triply-etl/publish/index.md index 26c0d08c..8edd8f16 100644 --- a/docs/triply-etl/publish/index.md +++ b/docs/triply-etl/publish/index.md @@ -150,7 +150,7 @@ The function destination expects that source data is linked data. Copying a sour ## Using TriplyDB.js in TriplyETL -All operations that can be performed in a TriplyDB instance can be automated with classes and methods in the [TriplyDB.js](/triplydb-js) library. This library is also used by TriplyETL in the background to implement many of the TriplyETL functionalities. +All operations that can be performed in a TriplyDB instance can be automated with classes and methods in the [TriplyDB.js](../../triplydb-js) library. This library is also used by TriplyETL in the background to implement many of the TriplyETL functionalities. Sometimes it is useful to use classes and methods in TriplyDB.js directly. This is done in the following way: @@ -162,7 +162,7 @@ const etl = new Etl() console.log((await etl.triplyDb.getInfo()).name) ``` -The above example prints the name of the TriplyDB instance. But any other [TriplyDB.js](/triplydb-js) operations can be performed. For example, the user of the current API Token can change their avatar image in TriplyDB: +The above example prints the name of the TriplyDB instance. But any other [TriplyDB.js](../../triplydb-js) operations can be performed. For example, the user of the current API Token can change their avatar image in TriplyDB: ```ts @@ -220,7 +220,7 @@ You can also set the `ENV` variable in the GitLab CI/CD environment. This allows ## Upload prefix declarations -At the end of a TriplyETL script, it is common to upload the [prefix declarations](/triply-etl/declare#declarePrefix) that are configured for that pipeline. +At the end of a TriplyETL script, it is common to upload the [prefix declarations](../declare#declarePrefix) that are configured for that pipeline. This is often done directly before or after graphs are uploaded (function [toTriplyDb()](#remote-data-destinations)): diff --git a/docs/triply-etl/tmp/automation.md b/docs/triply-etl/tmp/automation.md index 54a81a2e..699c2c31 100644 --- a/docs/triply-etl/tmp/automation.md +++ b/docs/triply-etl/tmp/automation.md @@ -18,4 +18,4 @@ TriplyETL runs within a Gitlab CI environment ([Figure 1](#figure-1)). The special key `$environment` denotes the DTAP environment in which the TriplyETL pipeline is running. This allows special actions to be performed based on whether the pipeline runs in `"Debug"`, `"Test"`, `"Acceptance"`, or `"Production"` mode. -See the [DTAP documentation](/triply-etl/dtap) for more information. +See the [DTAP documentation](../../publish/#setting-up-acceptanceproduction-runs-dtap) for more information. 
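As an illustration of the `$environment` key, the following sketch only logs records when the pipeline runs in the Production environment. It assumes that the `whenEqual()` function shown in the Record documentation can be imported from the same generic module as `logRecord()`; the exact import path may differ.

```ts
import { Etl, fromJson, logRecord, whenEqual } from '@triplyetl/etl/generic'

export default async function (): Promise<Etl> {
  const etl = new Etl()
  etl.use(
    fromJson([{ id: 1 }, { id: 2 }]),
    // Assumption: whenEqual() is exported from the generic module.
    // Records are only logged when the DTAP environment is Production.
    whenEqual('$environment', 'Production', logRecord()),
  )
  return etl
}
```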
diff --git a/docs/triply-etl/tmp/static-dynamic-statements.md b/docs/triply-etl/tmp/static-dynamic-statements.md index f20728bd..09c23a99 100644 --- a/docs/triply-etl/tmp/static-dynamic-statements.md +++ b/docs/triply-etl/tmp/static-dynamic-statements.md @@ -13,7 +13,7 @@ We use the following Record as an example: | Germany | 83190556 | | Netherlands | 17650200 | -We start with creating the prefix and term declarations (see the [Declare](/triply-etl/declare) documentation for more information): +We start with creating the prefix and term declarations (see the [Declare](../../declare) documentation for more information): ```ts const base = declarePrefix('https://triplydb.com/Triply/example/') diff --git a/docs/triply-etl/transform/index.md b/docs/triply-etl/transform/index.md index 099e90e2..4a71310a 100644 --- a/docs/triply-etl/transform/index.md +++ b/docs/triply-etl/transform/index.md @@ -20,7 +20,7 @@ graph LR tdb[(Triple Store)] ``` -If you do not have a stream of records yet, read the documentation for the [**Extract** step](/triply-etl/extract) first. +If you do not have a stream of records yet, read the documentation for the [**Extract** step](../extract) first. Once you have a stream of records, the following transformations are typically needed: - Values need to be mapped onto a prepared list of IRIs or literals (e.g. from country names to country-denoting IRIs). @@ -31,8 +31,8 @@ Once you have a stream of records, the following transformations are typically n TriplyETL supports the following transformation approaches: -- 2A. [**RATT**](/triply-etl/transform/ratt) transformations are a set of commonly used transformation functions that are developed and maintained by Triply. -- 2B. [**TypeScript**](/triply-etl/transform/typescript) can be used to write new customer transformations. +- 2A. [**RATT**](ratt) transformations are a set of commonly used transformation functions that are developed and maintained by Triply. +- 2B. [**TypeScript**](typescript) can be used to write new customer transformations. @@ -40,4 +40,4 @@ TriplyETL supports the following transformation approaches: The Transform step results in a cleaned and enriched record. The following link documents how you can use the record to make linked data assertions: -- [3. **Assert**](/triply-etl/assert) uses data from the Record to generate linked data in the Internal Store. +- [3. **Assert**](../assert) uses data from the Record to generate linked data in the Internal Store. diff --git a/docs/triply-etl/transform/ratt/index.md b/docs/triply-etl/transform/ratt/index.md index 316e1cb4..ca7ff8e5 100644 --- a/docs/triply-etl/transform/ratt/index.md +++ b/docs/triply-etl/transform/ratt/index.md @@ -4,7 +4,7 @@ RATT transformations are a core set of functions that are commonly used to change the content of TriplyETL Records. -RATT transformations started out as [TypeScript transformations](/triply-etl/transform/typescript) that turned out to be useful in a wide variety of TriplyETL pipelines. Triply maintains this core set of transformation functions to allow new ETLs to make use of off-the-shelf functionality that has proven useful in the past. +RATT transformations started out as [TypeScript transformations](../typescript) that turned out to be useful in a wide variety of TriplyETL pipelines. Triply maintains this core set of transformation functions to allow new ETLs to make use of off-the-shelf functionality that has proven useful in the past. 
## Overview @@ -47,7 +47,7 @@ Creates an IRI based on the specified IRI prefix and the hash calculated over th ### Parameters - `prefix` An IRI, or a key that contains an IRI value. -- `content` A key that contains a string value, or a string value specified with function [str()](/triply-etl/assert/ratt#function-str). +- `content` A key that contains a string value, or a string value specified with function `str()`. - `key` A new key where the created hashed IRI is stored. ### Use cases @@ -211,7 +211,7 @@ This transformation can be used in the following two ways: ### See also -If the created IRI is used exactly once, it is often better to use inline function [iri()](/triply-etl/assert/ratt/term#function-iri) instead. +If the created IRI is used exactly once, it is often better to use inline function [iri()](../../assert/ratt/term#function-iri) instead. ### Example: Prefix declaration and local name @@ -242,7 +242,7 @@ graph LR johndoe(person:johndoe):::data ``` -The following snippet makes the same assertion, but uses assertion [iri()](/triply-etl/assert/ratt/term#function-iri) instead of transformation `addIri()`: +The following snippet makes the same assertion, but uses assertion [iri()](../../assert/ratt/term#function-iri) instead of transformation `addIri()`: ```ts triple(iri(prefix.person, 'username'), a, sdo.Person), @@ -270,7 +270,7 @@ graph LR johndoe(https://example.com/id/person/johndoe):::data ``` -The following snippet uses assertion [iri()](/triply-etl/assert/ratt/term#function-iri) instead of transformation `addIri()`: +The following snippet uses assertion [iri()](../../assert/ratt/term#function-iri) instead of transformation `addIri()`: ```ts triple(iri('https://example.com/id/person/johndoe'), a, sdo.Person), @@ -292,19 +292,19 @@ This transformation can be used in the following 3 ways: This transformation is typically used when: -- The same literal occurs in two or more statement assertions (function [triple()](/triply-etl/assert/ratt/statement#function-triple) or [quad()](/triply-etl/assert/ratt/statement#function-quad)). This avoids having to specify the same literal multiple times using function [literal()](/triply-etl/assert/ratt/term#function-literal). +- The same literal occurs in two or more statement assertions (function [triple()](../../assert/ratt/statement#function-triple) or [quad()](../../assert/ratt/statement#function-quad)). This avoids having to specify the same literal multiple times using function [literal()](../../assert/ratt/term#function-literal). - The datatype or language tag is derived from the source data record. ### Parameters -- `content` A key that contains a string value, or a string specified with function [str()](/triply-etl/assert/ratt/term#function-str). +- `content` A key that contains a string value, or a string specified with function `str()`. - `datatype` Optionally, a key that stores an IRI or a static IRI. -- `languageTag` Optionally, a language tag from the [`lang`](/triply-etl/declare#language-declarations) object, or a key that stores such a language tag. +- `languageTag` Optionally, a language tag from the [`lang`](../../declare#language-declarations) object, or a key that stores such a language tag. - `key` A new key where the created literal is stored. ### See also -If the created literal is used exactly once, it is often better to use the inline function [literal()](/triply-etl/assert/ratt/term#function-literal) instead. 
+If the created literal is used exactly once, it is often better to use the inline function [literal()](../../assert/ratt/term#function-literal) instead. ### Example: Typed literal @@ -326,7 +326,7 @@ This makes the following linked data assertion: book:123 sdo:dateCreated '2022-30-01'^^xsd:date. ``` -Notice that the same linked data could have been asserted with the following use the the [literal} assertion middleware: +Notice that the same linked data could have been asserted with the following use the the [literal()](../../assert/ratt/term#function-literal) assertion middleware: ```ts fromJson([{ id: '123', date: '2022-01-30' }]), @@ -381,7 +381,7 @@ This results in the following linked data assertion: city:London skos:prefLabel 'London'@en-gb. ``` -Notice that the same linked data could have been asserted with the following use the the [literal} assertion middleware: +Notice that the same linked data could have been asserted with the following use the the [literal()](../../assert/ratt/term#function-literal) assertion middleware: ```ts fromJson([{ name: 'London' }]), @@ -462,7 +462,7 @@ The snippet includes the prefix declarations to illustrate that the path of the const base = 'https://example.com/' const prefix = { feature: declarePrefix(base('id/feature/')), - skolem: declarePrefix(base('.well-known/genid/'), + skolem: declarePrefix(base('.well-known/genid/')), } // Etc @@ -655,7 +655,7 @@ An optionally specified separator is placed in between every two consecutive str ### Parameters -- `content` An array of key that contain a string and/or strings specified with assertion [str()](/triply-etl/assert/ratt#function-str). +- `content` An array of key that contain a string and/or strings specified with assertion [str()](../../assert/ratt#function-str). - `separator` Optionally, the string that is places between every two consecutive string values. - `key` A new key where the concatenated string is stored. @@ -809,7 +809,7 @@ Transforms GeoJSON objects to their corresponding Well-Known Text (WKT) serializ ### Parameters - `content` A key that stores a GeoJSON object. -- `crs` Optionally, an IRI that denotes a Coordinate Reference System (CRS). You can use IRIs from the [`epsg`](/triply-etl/declare#geospatial-declarations) object. If absent, uses [https://epsg.io/4326](EPSG:4326/WGS84) as the CRS. +- `crs` Optionally, an IRI that denotes a Coordinate Reference System (CRS). You can use IRIs from the [`epsg`](../../declare#geospatial-declarations) object. If absent, uses [https://epsg.io/4326](EPSG:4326/WGS84) as the CRS. - `key` A new key where the WKT serialization string is stored ### GeoJSON and Well-Known Text (WKT) @@ -982,7 +982,7 @@ Older data formats sometimes use uppercase letters for header names or codes. Th ### Example -The following snppet starts out with header values that use uppercase characters exclusively. The `lowerCase` transformation is used to create lowercase names that can be used to create property IRIs. +The following snippet starts out with header values that use uppercase characters exclusively. The `lowerCase` transformation is used to create lowercase names that can be used to create property IRIs. ```ts fromJson([ @@ -1155,7 +1155,7 @@ Performs a regular expression replacement to the given input string, and stores ### Parameters -- `content` A key that contains a string value, or a static string specified with assertion [str()](/triply-etl/assert/ratt/term#function-str). 
+- `content` A key that contains a string value, or a static string specified with assertion [str()](../../assert/ratt/term#function-str). - `from` A [JavaScript Regular Expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions). - `to` Optionally, a string that replaces potential matches of the Regular Expression (`from`). Use `$1`, `$2`, etc. to insert matches. If absent, the empty string is used. - `key` A new key where the result of the replacement is stored. @@ -1205,14 +1205,14 @@ This transformation removes any trailing whitespace that remains after the strin ### Use cases The transformation is used when: -- Tablular source data encodes multiple values inside singular cells. (Such concatenated storage inside cells is a data quality issue, because the table format cannot guarantee that the separator character does not (accidentally) occur inside individual values inside a cell. Tree-shaped source formats are able to store multiple values for the same key reliably, e.g. JSON and XML.) +- Tabular source data encodes multiple values inside singular cells. (Such concatenated storage inside cells is a data quality issue, because the table format cannot guarantee that the separator character does not (accidentally) occur inside individual values inside a cell. Tree-shaped source formats are able to store multiple values for the same key reliably, e.g. JSON and XML.) - Source data contains complex string values that can be decomposed into stand-alone components with distinct meaning. ### Parameters -- `content` A key that stores a string, or a string specified with assertion [str()](/triply-etl/assert/ratt#str). +- `content` A key that stores a string, or a string specified with assertion [str()](../../assert/ratt#str). - `separator` A string or a regular expression that is used to separate the content. -- `key` A new key where the array of splitted strings is stored. +- `key` A new key where the array of split strings is stored. ### Example: Multiple values in singular table cells @@ -1317,7 +1317,7 @@ result in a new key. ### Parameters -- `content` A key that stores a string value, or a string specified with assertion [str()](/triply-etl/assert/ratt/term#function-str). +- `content` A key that stores a string value, or a string specified with assertion [str()](../../assert/ratt/term#function-str). - `start` The index of the first character that is included in the substring. The first character has index 0. - `end` Optionally, the index of the first character that is excluded from the substring. If absent, the substring ends at the end of the source string. - `key` The new key in which the substring is stored. @@ -1456,7 +1456,7 @@ This transformation is used when string values must be mapped onto literals with The datatype IRIs that could apply are specified in a list. The specified datatype IRIs are tried out from left to right. The first datatype IRI that results in a valid literal is chosen. -- `content` A key that contains a string value, or a string value specified with assertion [str()](/triply-etl/assert/ratt/term#function-str). +- `content` A key that contains a string value, or a string value specified with assertion [str()](../../assert/ratt/term#function-str). - `datatypes` An array of two or more datatype IRIs. - `key` A new key where the created literal is stored. 
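As an illustration of these parameters, here is a hedged sketch of a `tryLiteral()` call, written in the same snippet style as the other examples (i.e. as a middleware inside `etl.use()`). It assumes the object-style arguments used by other RATT transformations and that the `xsd` datatype IRIs are available from the vocabulary declarations; the exact call signature may differ:

```ts
// Hedged sketch: try increasingly generic datatypes for the 'published' key.
tryLiteral({
  content: 'published',
  datatypes: [xsd.date, xsd.gYearMonth, xsd.gYear],
  key: '_published',
}),
```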
@@ -1486,7 +1486,7 @@ If we do not want to emit errors for string values that cannot be satisfy any of ### See also -You only need to use `tryLiteral()` if the datatype IRI varies from record to record. If the datatype IRI is the same for every record, then the regular assertion function [literal()](/triply-etl/assert/ratt/term#function-literal) should be used instead. +You only need to use `tryLiteral()` if the datatype IRI varies from record to record. If the datatype IRI is the same for every record, then the regular assertion function [literal()](../../assert/ratt/term#function-literal) should be used instead. diff --git a/docs/triply-etl/transform/typescript/index.md b/docs/triply-etl/transform/typescript/index.md index 109d0e29..6d3fef6f 100644 --- a/docs/triply-etl/transform/typescript/index.md +++ b/docs/triply-etl/transform/typescript/index.md @@ -7,7 +7,7 @@ path: "/docs/triply-etl/transform/typescript" # TypeScript -The vast majority of ETLs can be written with the core set of [RATT Transformations](/triply-etl/transform/ratt). But sometimes a custom transformation is necessary that cannot be handled by this core set. For such circumstances, TriplyETL allows a custom TypeScript function to be written. +The vast majority of ETLs can be written with the core set of [RATT Transformations](../ratt). But sometimes a custom transformation is necessary that cannot be handled by this core set. For such circumstances, TriplyETL allows a custom TypeScript function to be written. Notice that the use of a custom TypeScript function should be somewhat uncommon. The vast majority of real-world transformations should be supported by the core set of RATT Transformations. @@ -64,7 +64,7 @@ This function emits an error if `NEW_KEY` already exists in the current Record. Notice that it is bad practice to use `custom.add()` for adding a new entry that is based on exactly one existing entry. In such cases, the use of function `custom.copy()` ### Example: Numeric calculations -Suppose the source data contains a numeric balance and a numeirc rate. We can use function `custom.add()` to calculate the interest and store it in a new key: +Suppose the source data contains a numeric balance and a numeric rate. We can use function `custom.add()` to calculate the interest and store it in a new key: ```ts import { Etl, fromJson } from '@triplyetl/etl/generic' @@ -176,7 +176,7 @@ Notice that the values for the `balance` keys were changed. ### Example: Cast numeric data -Some source data formats are unable to represent numeric data. A good example are the [CSV](/triply-etl/extract/formats#extractor-fromcsv) and [TSV](/triply-etl/extract/formats#extractor-fromtsv) formats, where every cell value is represented as a string. +Some source data formats are unable to represent numeric data. A good example are the [CSV](../../extract/formats#extractor-fromcsv) and [TSV](../../extract/formats#extractor-fromtsv) formats, where every cell value is represented as a string. If such a source data format that cannot represent numeric data is used, it is often useful to explicitly cast string values to numbers. @@ -210,12 +210,12 @@ After `custom.change()` has been applied, the record looks as follows: | Italy | null | | Netherlands | 17650200 | -Notice that strings that encode a number are correctly transformed, and non-empty strings that do not encode a number are transformed to `null`. Most of the time, this is the behaviour that you want in a linked data pipeline. 
+Notice that strings that encode a number are correctly transformed, and non-empty strings that do not encode a number are transformed to `null`. Most of the time, this is the behavior that you want in a linked data pipeline. -Also notice that the empty string is cast to the number zero. Most of the time, this is *not* what you want. If you want to prevent this transformation from happening, and you almost certainly do, you must process the source data conditionally, using [control structures](/triply-etl/control). +Also notice that the empty string is cast to the number zero. Most of the time, this is *not* what you want. If you want to prevent this transformation from happening, and you almost certainly do, you must process the source data conditionally, using [control structures](../../control). ### Example: Variant type diff --git a/docs/triply-etl/validate/index.md b/docs/triply-etl/validate/index.md index 26cc2fc6..0f359d20 100644 --- a/docs/triply-etl/validate/index.md +++ b/docs/triply-etl/validate/index.md @@ -31,7 +31,7 @@ Triply believes that *every* ETL should include the Validate step to ensure that TriplyETL supports two approaches for validating linked data: -- 5A. [**Graph Comparison**](/triply-etl/validate/graph-comparison) uses one or more manually created 'gold records'. Graph comparison ensures that these records are transformed in the intended way by the ETL pipeline. -- 5B. [**SHACL Validation**](/triply-etl/validate/shacl) uses a generic data model. SHACL Validation ensures that each individual record is processed in accordance with the generic data model. +- 5A. [**Graph Comparison**](../../triply-etl/validate/graph-comparison) uses one or more manually created 'gold records'. Graph comparison ensures that these records are transformed in the intended way by the ETL pipeline. +- 5B. [**SHACL Validation**](../../triply-etl/validate/shacl) uses a generic data model. SHACL Validation ensures that each individual record is processed in accordance with the generic data model. Notice that it is possible to combine these two approaches in the same ETL pipeline: you can use graph comparison to test for specific conformities, and use SHACL to test for generic conformities. diff --git a/docs/triply-etl/validate/shacl/index.md b/docs/triply-etl/validate/shacl/index.md index df18c754..5f0ac940 100644 --- a/docs/triply-etl/validate/shacl/index.md +++ b/docs/triply-etl/validate/shacl/index.md @@ -15,7 +15,7 @@ This page documents how SHACL is used to validate linked data in the internal st SHACL Validation can be used when the following preconditions are met: 1. A data model that uses SHACL constraints. -2. Some data must be asserted in the internal store. If your internal store is still empty, you can read [the Assert documentation](/triply-etl/assert) on how to add assertions to that store. +2. Some data must be asserted in the internal store. If your internal store is still empty, you can read [the Assert documentation](../../assert) on how to add assertions to that store. 
The function for running SHACL Validation is imported as follows: @@ -86,7 +86,7 @@ In our example we are using the following source data that records the age of a } ``` -In our example the data source is [inline JSON](/triply-etl/extract/types#inline-json), but notice that any source format could have been used: +In our example the data source is [inline JSON](../../extract/formats/#extractor-fromjson), but notice that any source format could have been used: ```ts fromJson([{ age: 'twelve', id: '1' }]), @@ -124,7 +124,7 @@ This Information Model specifies that instances of class `foaf:Person` must have ### Step 4: Transformation -We now have source data (Step 1), and a fair intuition about our target data (Step 2), and an Information Model (Step 3). We can automate the mapping from source to target data with an [Assertion](/triply-etl/assert): +We now have source data (Step 1), and a fair intuition about our target data (Step 2), and an Information Model (Step 3). We can automate the mapping from source to target data with an [Assertion](../../assert): ```ts etl.use( @@ -172,18 +172,21 @@ shp:Person_age ``` Notice the following details: + - We enforce a Closed World Semantics (CWA) in our Information Models with the `sh:closed` property. If a property is not explicitly specified in our Information Model, it is not allowed to be used with instance data. + - We create IRIs in the dedicated `shp:` namespace for nodes in the Information Model. + - Elements in our Information Model are always in a one-to-one correspondence with elements in our Knowledge Model: - - Node shapes such as `shp:Person` relate to a specific class such as `foaf:Person`. - - Property shapes such as `shp:Person_age` relate to a specific property such as `foaf:age`. + - Node shapes such as `shp:Person` relate to a specific class such as `foaf:Person`. + - Property shapes such as `shp:Person_age` relate to a specific property such as `foaf:age`. ### Step 6: Use the `validate()` function TriplyETL has a dedicated function that can be used to automatically enforce Information Models such as the one expressed in Step 5. -Since the Information Model is relatively small, it can be specified in-line using the [string source type](/triply-etl/extract/types/#strings). Larger models will probably be stored in a separate file or in a TriplyDB graph or asset. +Since the Information Model is relatively small, it can be specified in-line using the [string source type](../../extract/types/#strings). Larger models will probably be stored in a separate file or in a TriplyDB graph or asset. ```ts validate(Source.string(` @@ -287,7 +290,7 @@ In this case, changing the data in the source system seem the most logical. Aft #### Option 2: Change the transformation and/or assertions -Alternatively, it is possible to transform English words that denote numbers to their corresponding numeric values. Since people can get up to one hundred years old, or even older, there are many words that we must consider and transform. This can be done with the [`translateAll()` transformation](/triply-etl/transform/ratt#function-translateall): +Alternatively, it is possible to transform English words that denote numbers to their corresponding numeric values. Since people can get up to one hundred years old, or even older, there are many words that we must consider and transform. 
This can be done with the [`translateAll()` transformation](../../transform/ratt#function-translateall): ```ts translateAll({ diff --git a/docs/triplydb-js/account/index.md b/docs/triplydb-js/account/index.md index 94640676..1c50c6ed 100644 --- a/docs/triplydb-js/account/index.md +++ b/docs/triplydb-js/account/index.md @@ -2,11 +2,11 @@ # Account -Instances of the `Account` class denote TriplyDB accounts. Accounts can be either organizations ([`Organization`](/triplydb-js/organization#organization)) or users ([`User`](/triplydb-js/user#user)). +Instances of the `Account` class denote TriplyDB accounts. Accounts can be either organizations ([`Organization`](../organization#organization)) or users ([`User`](../user#user)). Account objects are obtained by calling the following method: -- [`App.getAccount(name?: string)`](/triplydb-js/app#appgetaccountname-string) +- [`App.getAccount(name?: string)`](../app#appgetaccountname-string) ## Account.addDataset(name: string, metadata?: object) @@ -97,7 +97,7 @@ const dataset = await account.addDataset('iris', { ### See also -This method returns a dataset object. See the [Dataset](/triplydb-js/dataset#dataset) section for an overview of the methods that can be called on such objects. +This method returns a dataset object. See the [Dataset](../dataset#dataset) section for an overview of the methods that can be called on such objects. ## Account.addQuery(name: string, metadata: object) @@ -116,10 +116,10 @@ Adds a new SPARQL query to the account.
The SPARQL query string (e.g., 'select * { ?s ?p ?o }').
dataset: Dataset
-
An instance of class Dataset that the current API token gives access to.
+
An instance of class Dataset that the current API token gives access to.
or
service: Service
-
An instance of class Service that the current API token gives access to and that you want to be associated with this query. The Service given will be used as a preferred service for this query.
+
An instance of class Service that the current API token gives access to and that you want to be associated with this query. The Service given will be used as a preferred service for this query.
**Optional:** @@ -149,31 +149,31 @@ The `metadata` argument specifies the required Dataset or Service and access lev
The visualization plugin that is used to display the result set of the query. If none is set it defaults to 'table'.
'boolean'
-
The boolean view is a special view for ask queries. The value is either 'true' or 'false', and is visualized as `X`(False) or `V`(True).
+
The boolean view is a special view for ask queries. The value is either 'true' or 'false', and is visualized as `X` (False) or `V` (True).
'gallery'
-
The gallery view allows SPARQL results to be displayed in an HTML gallery.
+
The gallery view allows SPARQL results to be displayed in an HTML gallery.
'gchart'
-
The gchart renders geographical, temporal and numerical data in interactive charts such as bar-, line- and pie charts.
+
The gchart renders geographical, temporal and numerical data in interactive charts such as bar-, line- and pie charts.
'geo'
-
The geo allows SPARQL results that contain GeoSPARQL semantics to be automatically interpreted and displayed on a 2D map.
+
The geo allows SPARQL results that contain GeoSPARQL semantics to be automatically interpreted and displayed on a 2D map.
'geoEvents'
-
The geoEvents plugin renders geographical events as a story map.
+
The geoEvents plugin renders geographical events as a story map.
'geo3d'
-
The geo3d allows SPARQL results that contain GeoSPARQL semantics to be automatically interpreted and displayed on a 3D globe.
+
The geo3d allows SPARQL results that contain GeoSPARQL semantics to be automatically interpreted and displayed on a 3D globe.
'markup'
-
The markup can be used to render a variety of markup languages. This requires the use of the `?markup` variable to identify which variable to render.
+
The markup can be used to render a variety of markup languages. This requires the use of the `?markup` variable to identify which variable to render.
'network'
-
The network renders SPARQL Construct results in a graph representation. The maximum amount of results that can be visualized is 1.000 due to performance.
+
The network renders SPARQL Construct results in a graph representation. The maximum number of results that can be visualized is 1,000, for performance reasons.
'pivot'
-
The pivot view renders SPARQL results in an interactive pivot table where you are able to aggregate the results by dragging your binding variables to columns or rows.
+
The pivot view renders SPARQL results in an interactive pivot table where you are able to aggregate the results by dragging your binding variables to columns or rows.
'response'
-
The response view shows the body of the response and offers a easy way to download the result as a file.
+
The response view shows the body of the response and offers an easy way to download the result as a file.
'table'
-
The table view allows SPARQL results to be displayed in a table. Each +
The table view allows SPARQL results to be displayed in a table. Each column in the table corresponds to a variable that belongs to the outer projection.
'timeline'
-
The timeline timeline renders the SPARQL results on a Timeline.
+
The timeline view renders the SPARQL results on a timeline.
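Putting the required and optional metadata together, the following is an illustrative sketch of a call to `Account.addQuery()`. The key used here for the visualization plugin (`output`) is an assumption and may differ in your TriplyDB.js version:

```ts
const account = await triply.getAccount()
const dataset = await account.getDataset('my-dataset')
const query = await account.addQuery('my-query', {
  queryString: 'select * { ?s ?p ?o } limit 10',
  dataset,
  // Assumption: the visualization plugin is selected through this key.
  output: 'table',
})
console.log((await query.getInfo()).name)
```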
@@ -278,7 +278,7 @@ A story element is an object with the following keys:
The Markdown content of a story paragraph. Only allowed when the type is set to 'paragraph'
query: Query
-
An instance of class Query.
+
An instance of class Query.
queryVersion: number
The version that is used of the specified query.
@@ -312,7 +312,7 @@ const newStory = await user.addStory('name-of-story', { Casts the TriplyDB account object to its corresponding organization object. -Class [Organization](/triplydb-js/organization#organization) is a specialization of class [Account](#account). +Class [Organization](../organization#organization) is a specialization of class [Account](#account). Calling this method on an `Organization` object does nothing. @@ -327,7 +327,7 @@ const organization = account.asOrganization() ### Alternatives -This method is not needed if the organization is directly retrieved with the specialization method [`App.getOrganization(name: string)`](/triplydb-js/app#appgetorganizationname-string). +This method is not needed if the organization is directly retrieved with the specialization method [`App.getOrganization(name: string)`](../app#appgetorganizationname-string). The following snippet returns the same result as the above example, but in a more direct way: @@ -337,16 +337,16 @@ const organization = await triply.getOrganization('Triply') ### See also -This method returns an organization object. See class [Organization](/triplydb-js/organization#organization) for an overview of the methods that can be called on such objects. +This method returns an organization object. See class [Organization](../organization#organization) for an overview of the methods that can be called on such objects. ## Account.asUser() Casts the TriplyDB account object to its corresponding user object. -Class [User](/triplydb-js/user#user) is a specialization of class [Account](#account). +Class [User](../user#user) is a specialization of class [Account](#account). -Calling this method on a [User](/triplydb-js/user#user) object does nothing. +Calling this method on a [User](../user#user) object does nothing. ### Examples @@ -359,7 +359,7 @@ const user = account.asUser() ### Alternatives -This method is not needed if the user is directly retrieved with the specialization method [`App.getUser(name?: string)`](/triplydb-js/app#appgetusername-string). +This method is not needed if the user is directly retrieved with the specialization method [`App.getUser(name?: string)`](../app#appgetusername-string). The following snippet returns the same result as the above example, but in a more direct way: @@ -369,7 +369,7 @@ const user = await triply.getUser() ### See also -This method returns an organization object. See class [Organization](/triplydb-js/organization#organization) for an overview of the methods that can be called on such objects. +This method returns an organization object. See class [Organization](../organization#organization) for an overview of the methods that can be called on such objects. ## Account.ensureDataset(name: string, metadata?: object) @@ -415,12 +415,12 @@ console.log((await dataset.getInfo()).name) ### See also -This method returns a dataset object. See class [Dataset](/triplydb-js/dataset#dataset) for an overview of the methods that can be called on such objects. +This method returns a dataset object. See class [Dataset](../dataset#dataset) for an overview of the methods that can be called on such objects. ## Account.getDatasets() -Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over the accessible datasets for the current account. +Returns an [async iterator](../faq#what-is-an-async-iterator) over the accessible datasets for the current account. 
### Access restrictions @@ -565,9 +565,9 @@ for await (const item of account.getPinnedItems()) { This method returns various types of objects. Each class has different functionalities: -- See class [Dataset](/triplydb-js/dataset#dataset) for an overview of the methods for dataset objects. -- See class [Query](/triplydb-js/query#query) for an overview of the methods for query objects. -- See class [Story](/triplydb-js/story#story) for an overview of the methods for story objects. +- See class [Dataset](../dataset#dataset) for an overview of the methods for dataset objects. +- See class [Query](../query#query) for an overview of the methods for query objects. +- See class [Story](../story#story) for an overview of the methods for story objects. ## Account.getQuery(name: string) @@ -586,12 +586,12 @@ console.log((await query.getInfo()).requestConfig?.payload.query) ### See also -See class [Query](/triplydb-js/query#query) for an overview of the methods for query objects. +See class [Query](../query#query) for an overview of the methods for query objects. ## Account.getQueries() -Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over the accessible queries that belong to the account. +Returns an [async iterator](../faq#what-is-an-async-iterator) over the accessible queries that belong to the account. ### Access restrictions @@ -614,7 +614,7 @@ for await (const query of account.getQueries()) { ### See also -See class [Query](/triplydb-js/query#query) for an overview of the methods for query objects. +See class [Query](../query#query) for an overview of the methods for query objects. ## Account.ensureStory(name: string, metadata: object) @@ -716,7 +716,7 @@ const story = await account.getStory('the-iris-dataset') ### See also -See class [Story](/triplydb-js/story#story) for an overview of the methods for story objects. +See class [Story](../story#story) for an overview of the methods for story objects. ## Account.getStories() @@ -736,7 +736,7 @@ for await (const story of account.getStories()) { ### See also -See class [Story](/triplydb-js/story#story) for an overview of the methods for story objects. +See class [Story](../story#story) for an overview of the methods for story objects. ## Account.pinItems(items: array[Dataset|Story|Query]) diff --git a/docs/triplydb-js/app/index.md b/docs/triplydb-js/app/index.md index 7c50dc93..e5335cb0 100644 --- a/docs/triplydb-js/app/index.md +++ b/docs/triplydb-js/app/index.md @@ -22,7 +22,7 @@ Notice that the URL must point to the API of the TriplyDB server that the `App` When an API token is specified, the operations that can be performed through the `App` object are determined by: -1. The access level of the token: either “Read access”, “Write acces”, or “Management access”. +1. The access level of the token: either “Read access”, “Write access”, or “Management access”. 2. The credentials of the user account for which the API token is created. When a user is a member of an organization, she has access to all its datasets, stories, and queries; a user always has access to her own datasets, stores and queries. The following token access levels are available: @@ -37,9 +37,9 @@ The following token access levels are available: - Read operations over data with access level “Private” that belongs to organizations to which the user who created the token is a member. -2\. “Write acces” allows: +2\. “Write access” allows: - - All operations allows by “Read acces”. + - All operations allows by “Read access”. 
- Write operations over data that has access setting “Internal”. @@ -79,9 +79,9 @@ const account = await triply.getAccount() ### See also -This method returns an account object. See class [Account](/triplydb-js/account#account) for an overview of the methods that can be called on such objects. +This method returns an account object. See class [Account](../account#account) for an overview of the methods that can be called on such objects. -Class [Account](/triplydb-js/account#account) has two specializations: class [Organization](/triplydb-js/organization#organization) and class [User](/triplydb-js/user/#user). In line with these class specializations, there are also two method specializations: +Class [Account](../account#account) has two specializations: class [Organization](../organization#organization) and class [User](../user/#user). In line with these class specializations, there are also two method specializations: - Method [`App.getOrganization(name: string)`](#appgetorganizationname-string) returns an organization object. @@ -90,7 +90,7 @@ Class [Account](/triplydb-js/account#account) has two specializations: class [Or ## App.getAccounts() -Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over all accounts in the TriplyDB server. +Returns an [async iterator](../faq#what-is-an-async-iterator) over all accounts in the TriplyDB server. ### Example @@ -109,7 +109,7 @@ for await (const account of triply.getAccounts()) { console.log(await triply.getAccounts().toArray()) ``` -See class [Account](/triplydb-js/account#account) for an overview of the methods that can be used with account objects. +See class [Account](../account#account) for an overview of the methods that can be used with account objects. ## App.getInfo() @@ -141,7 +141,7 @@ This method is similar to [`App.getAccount(name?: string)`](#appgetaccountname-s - This method only works for accounts that represent TriplyDB organizations. -- This method returns an organization object. Class [Organization](/triplydb-js/organization#organization) is a specialization of class [Account](/triplydb-js/account#account). +- This method returns an organization object. Class [Organization](../organization#organization) is a specialization of class [Account](../account#account). ### Examples @@ -151,7 +151,7 @@ The following snippet returns the organization called `'Triply'`: const organization = await triply.getOrganization('Triply') ``` -See class [Organization](/triplydb-js/organization#organization) for an overview of the methods that can be used with organization objects. +See class [Organization](../organization#organization) for an overview of the methods that can be used with organization objects. ### Alternatives @@ -159,7 +159,7 @@ This method is a shorthand for calling the following two methods: - Call method [`App.getAccount(name?: string)`](#appgetaccountname-string) to retrieve an account object. -- Then call method [`Account.asOrganization()`](/triplydb-js/account#accountasorganization) to cast the account object into an organization object. +- Then call method [`Account.asOrganization()`](../account#accountasorganization) to cast the account object into an organization object. The following snippet returns the same result as the previous example, but uses two methods instead of one: @@ -170,7 +170,7 @@ const organization = account.asOrganization() ### See also -This method returns an organization object. 
See class [Organization](/triplydb-js/organization#organization) for an overview of the methods that can be called on such objects. +This method returns an organization object. See class [Organization](../organization#organization) for an overview of the methods that can be called on such objects. ## App.getUser(name?: string) @@ -199,7 +199,7 @@ This method is a shorthand for the following two methods: 1. Call method [`App.getAccount()`](#appgetaccountname-string) to retrieve an account object. -2. Then call method [`Account.asUser()`](/triplydb-js/account#accountasuser) to cast the account object into a user object. +2. Then call method [`Account.asUser()`](../account#accountasuser) to cast the account object into a user object. The following snippet returns the same result as the previous examples, but uses two methods instead of one: @@ -210,7 +210,7 @@ const user = account.asUser() ### See also -This method returns a user object. See class [User](/triplydb-js/user#user) for an overview of the methods that can be called on such objects. +This method returns a user object. See class [User](../user#user) for an overview of the methods that can be called on such objects. ## App.isCompatibleWith(minimumVersion: string) diff --git a/docs/triplydb-js/asset/index.md b/docs/triplydb-js/asset/index.md index dfd0d493..c50023a4 100644 --- a/docs/triplydb-js/asset/index.md +++ b/docs/triplydb-js/asset/index.md @@ -4,11 +4,13 @@ Not all data can be stored as RDF data. For example images and video files use a binary format. Such files can also be stored in TriplyDB as Assets and can be integrated into the Knowledge Graph. Each asset has a specific identifier that can be used in the Knowledge Graph. -An asset is always uploaded per dataset, for which the function `uploadAsset()` is used. see [Dataset.uploadAsset()](/triplydb-js/dataset#datasetuploadassetassetname-string-filepath-string) for uploading an asset. +An asset is always uploaded per dataset, for which the function `uploadAsset()` is used. see [Dataset.uploadAsset()](../dataset#datasetuploadassetassetname-string-filepath-string) for uploading an asset. If the asset already has been created following functions can retrieve it from the dataset. -- [Dataset.getAsset(assetName: string, versionNumber?: number)](/triplydb-js/dataset#datasetgetassetname-string-version-number) -- [Dataset.getAssets()](/triplydb-js/dataset#datasetgetassets) + +- [Dataset.getAsset(assetName: string, versionNumber?: number)](../dataset#datasetgetassetname-string-version-number) + +- [Dataset.getAssets()](../dataset#datasetgetassets) TriplyDB.js supports several functions to manipulate an asset on TriplyDB. diff --git a/docs/triplydb-js/dataset/index.md b/docs/triplydb-js/dataset/index.md index 841888a8..56cd24e1 100644 --- a/docs/triplydb-js/dataset/index.md +++ b/docs/triplydb-js/dataset/index.md @@ -36,7 +36,7 @@ This method is useful in practice, because it removes the burden on the programm The changes made as a result of calling this method depend on the current state of the connected-to TriplyDB server: - If this dataset does not yet have a service with the given `name`, then the behavior is identical to calling [`Dataset.addService(name: string, metadata?: object)`](#datasetaddservicename-string-metadata-object) with the same arguments. 
-- If this dataset already has a service with the given `name`, but with different `metadata` specified for it, then the behavior is identical to calling [`Account.getDataset(name: string)`](/triplydb-js/account#accountgetdatasetname-string) and [`Dataset.update(metadata: object)`](#datasetupdatemetadata-object). +- If this dataset already has a service with the given `name`, but with different `metadata` specified for it, then the behavior is identical to calling [`Account.getDataset(name: string)`](../account#accountgetdatasetname-string) and [`Dataset.update(metadata: object)`](#datasetupdatemetadata-object). - If this dataset already has a service with the given `name` and with the same `metadata`, then this method returns that service. ### Required @@ -137,7 +137,7 @@ const reasoning = await dataset.addService('reasoning', { ### See also -See class [Service](/triplydb-js/service#service) for an overview of the methods that can be used with service objects. +See class [Service](../service#service) for an overview of the methods that can be used with service objects. ## Dataset.clear(...resourceType: string) @@ -261,7 +261,7 @@ const asset = await dataset.getAsset('file.png', 1) ## Dataset.getAssets() -Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over the assets that belong to this dataset. +Returns an [async iterator](../faq#what-is-an-async-iterator) over the assets that belong to this dataset. Assets are binary files that are stored together with data graphs. Common examples include documents, images and videos. @@ -307,7 +307,7 @@ const graph = dataset.getGraph('https://example.com/cats') ## Dataset.getGraphs() -Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over graphs that belong to this dataset. +Returns an [async iterator](../faq#what-is-an-async-iterator) over graphs that belong to this dataset. ### Examples @@ -373,9 +373,9 @@ const service = dataset.getService('acceptance') ## Dataset.getServices() -Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over TriplyDB services under a dataset. +Returns an [async iterator](../faq#what-is-an-async-iterator) over TriplyDB services under a dataset. -See class [Service](/triplydb-js/service#service) for an overview of the methods for service objects. +See class [Service](../service#service) for an overview of the methods for service objects. ### Examples @@ -400,7 +400,7 @@ console.log(await dataset.getServices().toArray()) ## Dataset.getStatements({subject?: string, predicate?: string, object?: string, graph?: string}) -Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) with statements (quadruples) that fit the specified pattern. +Returns an [async iterator](../faq#what-is-an-async-iterator) with statements (quadruples) that fit the specified pattern. ### Arguments diff --git a/docs/triplydb-js/faq/index.md b/docs/triplydb-js/faq/index.md index bd5080d5..1a3e42e6 100644 --- a/docs/triplydb-js/faq/index.md +++ b/docs/triplydb-js/faq/index.md @@ -62,7 +62,7 @@ const query = await account.getQuery('name-of-some-query') const triply = App.get({ token: process.env.TOKEN }) ``` -3\. Do not forget that we perform TriplyDB.js requests within an [async context](#create-your-first-script). That is: +3\. Do not forget that we perform TriplyDB.js requests within an async context. That is: ```ts async function run() { @@ -142,7 +142,7 @@ const array = await results.toArray() TriplyDB.js makes use of async iterators for retrieving lists of objects. 
Async iterators are a method of fetching and iterating through large lists, without having to first fetch the whole set. -An example of an async iterator in TriplyDB.js is [`App.getAccounts()`](/triplydb-js/app#appgetaccounts). The following code illustrates how it can be used. +An example of an async iterator in TriplyDB.js is [`App.getAccounts()`](../app#appgetaccounts). The following code illustrates how it can be used. ```ts for await (const account of triply.getAccounts()) { @@ -158,13 +158,13 @@ const accounts = await triply.getAccounts().toArray() TriplyDB.js returns async iterators from the following methods: -- [`App.getAccounts()`](/triplydb-js/app#appgetaccounts) -- [`Account.getDatasets()`](/triplydb-js/account#accountgetdatasets) -- [`Account.getQueries()`](/triplydb-js/account#accountgetqueries) -- [`Account.getStories()`](/triplydb-js/account#accountgetstories) -- [`Dataset.getServices()`](/triplydb-js/dataset#datasetgetservices) -- [`Dataset.getAssets()`](/triplydb-js/dataset#datasetgetassets) -- [`Dataset.getGraphs()`](/triplydb-js/dataset#datasetgetgraphs) -- [`Dataset.getStatements()`](/triplydb-js/dataset#datasetgetstatementssubject-string-predicate-string-object-string-graph-string) -- [`Query.results().statements()`](/triplydb-js/query#queryresultsapivariables-object-options-object) for SPARQL `construct` and `describe` queries -- [`Query.results().bindings()`](/triplydb-js/query/#queryresultsapivariables-object-options-object) for SPARQL `select` queries +- [`App.getAccounts()`](../app#appgetaccounts) +- [`Account.getDatasets()`](../account#accountgetdatasets) +- [`Account.getQueries()`](../account#accountgetqueries) +- [`Account.getStories()`](../account#accountgetstories) +- [`Dataset.getServices()`](../dataset#datasetgetservices) +- [`Dataset.getAssets()`](../dataset#datasetgetassets) +- [`Dataset.getGraphs()`](../dataset#datasetgetgraphs) +- [`Dataset.getStatements()`](../dataset#datasetgetstatementssubject-string-predicate-string-object-string-graph-string) +- [`Query.results().statements()`](../query#queryresultsapivariables-object-options-object) for SPARQL `construct` and `describe` queries +- [`Query.results().bindings()`](../query/#queryresultsapivariables-object-options-object) for SPARQL `select` queries diff --git a/docs/triplydb-js/index.md b/docs/triplydb-js/index.md index b414f5b7..6a1bd4ab 100644 --- a/docs/triplydb-js/index.md +++ b/docs/triplydb-js/index.md @@ -5,7 +5,7 @@ path: '/docs/triplydb-js' [TOC] -**TriplyDB.js** is the official programming library for interacting with [TriplyDB](/triply-db-getting-started). TriplyDB.js allows you to automate operations that would otherwise be performed in the TriplyDB GUI. +**TriplyDB.js** is the official programming library for interacting with [TriplyDB](../../triply-db-getting-started). TriplyDB.js allows you to automate operations that would otherwise be performed in the TriplyDB GUI. TriplyDB.js is implemented in [TypeScript](https://www.typescriptlang.org). TypeScript is a type-safe language that transpiles to [JavaScript](https://en.wikipedia.org/wiki/JavaScript). This allows you to use TriplyDB.js in web browsers as well as on servers (using [Node.js](https://nodejs.org)). TriplyDB.js is open source and its source code is published on [GitHub](https://github.com/TriplyDB/TriplyDB-JS). @@ -17,7 +17,7 @@ Please contact [support@triply.cc](mailto:support@triply.cc) for questions and s TriplyDB.js contains several classes, each with their own methods. 
The documentation for every method includes at least one code example. These code examples can be run by inserting them into the following overall script. -Notice that `process.env.TOKEN` picks up an API token that is stored in the environment variable called `TOKEN`. Follow the steps on [this page](/triply-api/#creating-an-api-token) to create a new API token in the TriplyDB GUI. +Notice that `process.env.TOKEN` picks up an API token that is stored in the environment variable called `TOKEN`. Follow the steps on [this page](../../triply-api/#creating-an-api-token) to create a new API token in the TriplyDB GUI. ```ts require('source-map-support/register') diff --git a/docs/triplydb-js/organization/index.md b/docs/triplydb-js/organization/index.md index 24de6144..3d03613c 100644 --- a/docs/triplydb-js/organization/index.md +++ b/docs/triplydb-js/organization/index.md @@ -6,13 +6,13 @@ Instances of class `Organization` denote organizations in TriplyDB. ### Obtaining instances -Organizations are obtained with method [`App.getOrganization(name: string)`](/triplydb-js/app#appgetorganizationname-string): +Organizations are obtained with method [`App.getOrganization(name: string)`](../app#appgetorganizationname-string): ```ts const organization = await triply.getOrganization('Triply') ``` -Alternatively, organizations are obtained by first obtaining an account ([`App.getAccount(name?: string)`](/triplydb-js/app#appgetaccountname-string)) and then casting it to an organization ([`Account.asOrganization()`](/triplydb-js/account#accountasorganization)): +Alternatively, organizations are obtained by first obtaining an account ([`App.getAccount(name?: string)`](../app#appgetaccountname-string)) and then casting it to an organization ([`Account.asOrganization()`](../account#accountasorganization)): ```ts const account = await triply.getAccount('Triply') @@ -21,14 +21,14 @@ const organization = account.asOrganization() ### Inheritance -`Organization` is a subclass of [`Account`](/triplydb-js/account#account), from which it inherits most of its methods. +`Organization` is a subclass of [`Account`](../account#account), from which it inherits most of its methods. ## Organization.addDataset(name: string, metadata?: object) Adds a new TriplyDB dataset with the given `name` to the current organization. -Inherited from [`Account.addDataset(name: string, metadata?: object)`](/triplydb-js/account#accountadddatasetname-string-metadata-object). +Inherited from [`Account.addDataset(name: string, metadata?: object)`](../account#accountadddatasetname-string-metadata-object). ## Organization.addMember(user: User, role?: Role) @@ -69,21 +69,21 @@ Removes a member from the given `Organization`. Adds a new TriplyDB query to the current organization. -Inherited from [`Account.addQuery(name: string, metadata: object)`](/triplydb-js/account#accountaddqueryname-string-metadata-object). +Inherited from [`Account.addQuery(name: string, metadata: object)`](../account#accountaddqueryname-string-metadata-object). ## Organization.ensureStory(name: string, metadata: object) Ensures the existence of a story with the given `name` and with the specified `metadata`. -Inherited from [`Account.ensureStory(name: string, metadata: object)`](/triplydb-js/account#accountensurestoryname-string-metadata-object). +Inherited from [`Account.ensureStory(name: string, metadata: object)`](../account#accountensurestoryname-string-metadata-object). 
## Organization.addStory(name: string, metadata?: object)

Adds a new TriplyDB story with the given `name` to the current organization.

-Inherited from [`Account.addStory(name: string, metadata?: object)`](/triplydb-js/account#accountaddstoryname-string-metadata-object).
+Inherited from [`Account.addStory(name: string, metadata?: object)`](../account#accountaddstoryname-string-metadata-object).

## Organization.delete()

@@ -104,21 +104,21 @@ await organization.delete()

Ensures the existence of a dataset with the given `name` and with the specified `metadata`.

-Inherited from [`Account.ensureDataset(name: string, metadata?: object)`](/triplydb-js/account#accountensuredatasetname-string-metadata-object).
+Inherited from [`Account.ensureDataset(name: string, metadata?: object)`](../account#accountensuredatasetname-string-metadata-object).

## Organization.getDataset(name: string)

Returns the dataset with the given `name` that is published by this organization.

-Inherited from [`Account.getDataset(name: string)`](/triplydb-js/account#accountgetdatasetname-string).
+Inherited from [`Account.getDataset(name: string)`](../account#accountgetdatasetname-string).

## Organization.getDatasets()

-Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over the accessible datasets that belong to this organization.
+Returns an [async iterator](../faq/#what-is-an-async-iterator) over the accessible datasets that belong to this organization.

-Inherited from [`Account.getDatasets()`](/triplydb-js/account#accountgetdatasets).
+Inherited from [`Account.getDatasets()`](../account#accountgetdatasets).

## Organization.getMembers()

@@ -154,14 +154,14 @@ for (const membership of await org.getMembers()) {

### See also

-Memberships of organization are TriplyDB [users](/triplydb-js/user#user).
+Members of an organization are TriplyDB [users](../user#user).

## Organization.getPinnedItems()

Returns the list of datasets, stories and queries that are pinned for the current organization.

-Inherited from [`Account.getPinnedItems()`](/triplydb-js/account#accountgetpinneditems).
+Inherited from [`Account.getPinnedItems()`](../account#accountgetpinneditems).

## Organization.removeMember(user: User)

@@ -170,7 +170,7 @@ Removes the specified `user` from this organization.

### Arguments

-The `user` argument has to be a [`User`](/triplydb-js/user#user) object of a user.
+The `user` argument has to be a [`User`](../user#user) object.

### Existence considerations

@@ -186,7 +186,7 @@ const johnDoe = await app.getUser('john-doe')
await organization.removeMember(johnDoe)
```

-- The following snippet removes John Doe from the Triply organization, using a [`User`](/triplydb-js/user#user) object:
+- The following snippet removes John Doe from the Triply organization, using a [`User`](../user#user) object:

```ts
const organization = await triply.getOrganization('Triply')
@@ -199,11 +199,11 @@ await organization.removeMember(user)

Sets a new image that characterized this organization.

-Inherited from [`Account.setAvatar(file: string)`](/triplydb-js/account#accountsetavatarfile-string).
+Inherited from [`Account.setAvatar(file: string)`](../account#accountsetavatarfile-string).

## Organization.update(metadata: object)

Updates the metadata for this account.

-Inherited from [`Account.update(metadata: object)`](/triplydb-js/account#accountupdatemetadata-object).
+Inherited from [`Account.update(metadata: object)`](../account#accountupdatemetadata-object).
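The `Organization` methods above are typically combined in one script. The following is a minimal sketch, assuming the overall TriplyDB.js script shown earlier (so `triply` is already initialized with an API token); the organization and dataset names (`my-organization`, `my-dataset`) are placeholders rather than values taken from the documentation.

```ts
// Minimal sketch combining the Organization methods documented above.
// Assumes `triply` from the overall script; names are placeholders.
const organization = await triply.getOrganization('my-organization')

// Ensure that a dataset exists under the organization (inherited from Account).
await organization.ensureDataset('my-dataset')

// List the organization's datasets that are accessible with the current token.
console.log(await organization.getDatasets().toArray())

// Print the organization's members.
for (const membership of await organization.getMembers()) {
  console.log(membership)
}
```

Because `ensureDataset()` only ensures existence, the sketch can be re-run without creating duplicate datasets.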
diff --git a/docs/triplydb-js/query/index.md b/docs/triplydb-js/query/index.md index b0402cd8..b5a35313 100644 --- a/docs/triplydb-js/query/index.md +++ b/docs/triplydb-js/query/index.md @@ -104,7 +104,7 @@ At least one of the following arguments is required to create a new version. Any
IRI variable
An object of the form `Variable` - (see Account.addQuery()) + (see Account.addQuery())
@@ -123,7 +123,7 @@ It currently does not support the use of variables.

`Query.results()` function will automatically return all the results from a saved query. You can retrieve both results from a `select` or `ask` query and a `construct` or `describe` query. The results are returned as an `async iterator`.

-If there are more than 10 000 query results, they could be retrieved using [pagination with TriplyDB.js](/triply-db-getting-started/saved-queries#pagination-with-triplydbjs).
+If there are more than 10 000 query results, they could be retrieved using [pagination with TriplyDB.js](../../triply-db-getting-started/saved-queries#pagination-with-triplydbjs).

### Examples

diff --git a/docs/triplydb-js/service/index.md b/docs/triplydb-js/service/index.md
index adc56be6..0eff43f5 100644
--- a/docs/triplydb-js/service/index.md
+++ b/docs/triplydb-js/service/index.md
@@ -6,8 +6,8 @@ Service objects describe specific functionalities that can be created over datas

Service objects are obtained through the the following methods:

-- [`Dataset.addService`](/triplydb-js/dataset#datasetaddservicename-string-metadata-object)
-- [`Dataset.getServices`](/triplydb-js/dataset#datasetgetservices)
+- [`Dataset.addService`](../dataset#datasetaddservicename-string-metadata-object)
+- [`Dataset.getServices`](../dataset#datasetgetservices)

A service always has one of the following statuses:

@@ -113,7 +113,7 @@ await Promise.all(dataset.getServices().map(service => service.update()))

## Service.waitUntilRunning()

-A service can be stopped or updated. The use of asynchronous code means that when a start command is given it takes a while before the service is ready for use. To make sure a service is available for querying you can uesr the function `waitUntilRunning()` to make sure that the script will wait until the service is ready for use.
+A service can be stopped or updated. The use of asynchronous code means that when a start command is given it takes a while before the service is ready for use. To make sure a service is available for querying, you can use the function `waitUntilRunning()`, which makes the script wait until the service is ready for use.

### Example

diff --git a/docs/triplydb-js/story/index.md b/docs/triplydb-js/story/index.md
index 5cda629e..3bafdcef 100644
--- a/docs/triplydb-js/story/index.md
+++ b/docs/triplydb-js/story/index.md
@@ -7,17 +7,17 @@ A TriplyDB data story is a way of communicating information about your linked da

Story objects are obtained through the the following methods:

-- [`User.addStory`](/triplydb-js/user#useraddstoryname-string-metadata-object)
-- [`User.ensureStory`](/triplydb-js/user#userensurestoryname-string-metadata-object)
-- [`User.getStories`](/triplydb-js/user#usergetstories)
-- [`User.getStory`](/triplydb-js/user#usergetstoryname-string)
+- [`User.addStory`](../user#useraddstoryname-string-metadata-object)
+- [`User.ensureStory`](../user#userensurestoryname-string-metadata-object)
+- [`User.getStories`](../user#usergetstories)
+- [`User.getStory`](../user#usergetstoryname-string)

## Story.delete()

Deletes this story. This deletes all paragraphs that belong to this story.

-This _does not_ delete the queries that are linked into this story. If you also want to delete the queries, then this must be done with distinct calls of [`Query.delete()`](/triplydb-js/query#querydelete).
+This _does not_ delete the queries that are linked into this story.
If you also want to delete the queries, then this must be done with distinct calls of [`Query.delete()`](../query#querydelete).

### Examples

diff --git a/docs/triplydb-js/user/index.md b/docs/triplydb-js/user/index.md
index cdb1e127..d1ba9449 100644
--- a/docs/triplydb-js/user/index.md
+++ b/docs/triplydb-js/user/index.md
@@ -6,14 +6,14 @@ Instances of class `User` denote users in TriplyDB.

### Obtaining instances

-Users are obtained with method [`App.getUser(name?: string)`](/triplydb-js/app#appgetusername-string):
+Users are obtained with method [`App.getUser(name?: string)`](../app#appgetusername-string):

```ts
const user = triply.getUser('john-doe')
const user = triply.getUser()
```

-Alternatively, users are obtained by first obtaining an account ([`App.getAccount(name?: string)`](/triplydb-js/app#appgetaccountname-string)) and then casting it to a use ([`Account.asUser()`](/triplydb-js/account#accountasuser)):
+Alternatively, users are obtained by first obtaining an account ([`App.getAccount(name?: string)`](../app#appgetaccountname-string)) and then casting it to a user ([`Account.asUser()`](../account#accountasuser)):

```ts
const account = await triply.getAccount('john-doe')
@@ -22,39 +22,39 @@ const user = account.asUser()

### Inheritance

-`User` is a subclass of [Account](/triplydb-js/account#account), from which it inherits most of its methods.
+`User` is a subclass of [Account](../account#account), from which it inherits most of its methods.

### Limitations

-Users cannot be created or deleted through the TriplyDB.js library. See the [Triply Console documentation](/triply-db-getting-started) for how to create and delete users through the web-based GUI.
+Users cannot be created or deleted through the TriplyDB.js library. See the [Triply Console documentation](../../triply-db-getting-started) for how to create and delete users through the web-based GUI.

## User.addDataset(name: string, metadata?: object)

Adds a new TriplyDB dataset with the given `name` to the current account.

-Inherited from [`Account.addDataset(name: string, metadata?: object)`](/triplydb-js/account#accountadddatasetname-string-metadata-object).
+Inherited from [`Account.addDataset(name: string, metadata?: object)`](../account#accountadddatasetname-string-metadata-object).

## User.addQuery(metadata: object)

Adds a new TriplyDB query to the current user.

-Inherited from [`Account.addQuery(name:string, metadata: object)`](/triplydb-js/account#accountaddqueryname-string-metadata-object).
+Inherited from [`Account.addQuery(name:string, metadata: object)`](../account#accountaddqueryname-string-metadata-object).

## User.ensureStory(name: string, metadata: object)

Ensures the existence of a story with the given `name` and with the specified `metadata`.

-Inherited from [`Account.ensureStory(name: string, metadata: object)`](/triplydb-js/account#accountensurestoryname-string-metadata-object).
+Inherited from [`Account.ensureStory(name: string, metadata: object)`](../account#accountensurestoryname-string-metadata-object).

## User.addStory(name: string, metadata?: object)

Adds a new TriplyDB story with the given `name` to the current user.

-Inherited from [`Account.addStory(name: string, metadata?: object)`](/triplydb-js/account#accountaddstoryname-string-metadata-object).
+Inherited from [`Account.addStory(name: string, metadata?: object)`](../account#accountaddstoryname-string-metadata-object).
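To see how these inherited methods fit together, the following is a minimal sketch, again assuming the overall TriplyDB.js script (so `triply` is already initialized); the story name `my-story` is a placeholder.

```ts
// Minimal sketch of the User and Story methods documented above.
// Assumes `triply` from the overall script; 'my-story' is a placeholder name.
const user = await triply.getUser()

// Add a story under the logged-in user (inherited from Account).
const story = await user.addStory('my-story')

// Story.delete() would remove the story and all of its paragraphs, but queries
// linked into the story stay behind and need separate Query.delete() calls.
// await story.delete()
```

If the story may already exist, `ensureStory()` is the safer choice, since it only ensures existence instead of always creating a new story.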
## User.createOrganization(name: string, metadata?: object)

@@ -96,21 +96,21 @@ await user.createOrganization(my-organization, {name: 'My Organization'}))

Ensures the existence of a dataset with the given `name` and with the specified `metadata`.

-Inherited from [`Account.ensureDataset(name: string, metadata?: object)`](/triplydb-js/account#accountensuredatasetname-string-metadata-object).
+Inherited from [`Account.ensureDataset(name: string, metadata?: object)`](../account#accountensuredatasetname-string-metadata-object).

## User.getDataset(name: string)

Returns the TriplyDB dataset with the given `name` that is published by this user.

-Inherited from [`Account.getDataset(name: string)`](/triplydb-js/account#accountgetdatasetname-string).
+Inherited from [`Account.getDataset(name: string)`](../account#accountgetdatasetname-string).

## User.getDatasets()

-Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over the accessible datasets for the current user.
+Returns an [async iterator](../faq#what-is-an-async-iterator) over the accessible datasets for the current user.

-Inherited from [`Account.getDatasets()`](/triplydb-js/account#accountgetdatasets).
+Inherited from [`Account.getDatasets()`](../account#accountgetdatasets).

## User.getInfo()

@@ -177,7 +177,7 @@ console.log(await user.getInfo())

## User.getOrganizations()

-Returns an [async iterator](/triplydb-js/faq#what-is-an-async-iterator) over the organizations that this user is a member of.
+Returns an [async iterator](../faq#what-is-an-async-iterator) over the organizations that this user is a member of.

### Order considerations

@@ -196,25 +196,25 @@ for await (const organization of await user.getOrganizations()) {

### See also

-The [async iterator](/triplydb-js/faq#what-is-an-async-iterator) contains organization objects. See the section about the [`Organization`](/triplydb-js/organizaton#organization) class for methods that can be used on such objects.
+The [async iterator](../faq#what-is-an-async-iterator) contains organization objects. See the section about the [`Organization`](../organization#organization) class for methods that can be used on such objects.

## User.getPinnedItems()

Returns the list of datasets, stories and queries that are pinned for the current user.

-Inherited from [`Account.getPinnedItems()`](/triplydb-js/account#accountgetpinneditems).
+Inherited from [`Account.getPinnedItems()`](../account#accountgetpinneditems).

## User.setAvatar(file: string)

Sets a new image that characterized this user.

-Inherited from [`Account.setAvatar(file: string)`](/triplydb-js/account#accountsetavatarfile-string).
+Inherited from [`Account.setAvatar(file: string)`](../account#accountsetavatarfile-string).

## User.update(metadata: object)

Updates the metadata for this user.

-Inherited from [`Account.update(metadata: object)`](/triplydb-js/account#accountupdatemetadata-object).
+Inherited from [`Account.update(metadata: object)`](../account#accountupdatemetadata-object).

diff --git a/docs/yasgui/index.md b/docs/yasgui/index.md
index d4e51968..0c059f5f 100644
--- a/docs/yasgui/index.md
+++ b/docs/yasgui/index.md
@@ -106,8 +106,7 @@ one, but allows the entire HTML string to be written at once as a SPARQL
Template. Notice that this removes the need for concatenating (`concat/n`),
explicit to-string conversion (`str/1`), and also allows the HTML literal to be
constructed more easily (no `strdt/2` needed).
-You can [try this query
-online](https://triplydb.com/academy/-/queries/sparql-templating).
+You can [try this query online](https://triplydb.com/academy/-/queries/sparql-templating). ```sparql prefix def: @@ -367,7 +366,7 @@ This view recognizes the following SPARQL variable names: | `?xLabel` | The text or [HTML](#rendering-html) content of the popups that appears when the shape that is bound to `?x` is clicked. | | `?xZ` | The height in meters at which the 2.5D shape that is based on the 2D shape that is bound to `?x` starts. This variable is not needed if data is stored in native 3D. | -### Geo Events ([TriplyDB Plugin](/yasgui-api#triplyDbPlugins)) +### Geo Events ([TriplyDB Plugin](../yasgui-api#triplyDbPlugins)) The SPARQL Geo Events plugin renders geographical events as a story map ([example](https://api.triplydb.com/s/USQ5oNpL)). This view recognizes the following SPARQL variable names: