
FIX absolute links
anakonrad committed Oct 18, 2023
1 parent b65760e commit ece246b
Showing 42 changed files with 304 additions and 291 deletions.
2 changes: 1 addition & 1 deletion docs/generics/api-token.md
@@ -7,7 +7,7 @@ path: "/docs/api-token"

# API Token

-Applications (see [TriplyDB.js](/triplydb-js)) and pipelines (see [TriplyETL](/triply-etl)) often require access rights to interact with TriplyDB instances. Specifically, reading non-public data and writing any (public or non-public) data requires setting an API token. The token ensures that only users that are specifically authorized for certain datasets are able to access and/or modify those datasets.
+Applications (see [TriplyDB.js](../../triplydb-js)) and pipelines (see [TriplyETL](../../triply-etl)) often require access rights to interact with TriplyDB instances. Specifically, reading non-public data and writing any (public or non-public) data requires setting an API token. The token ensures that only users that are specifically authorized for certain datasets are able to access and/or modify those datasets.

The following steps must be performed in order to create an API token:

10 changes: 5 additions & 5 deletions docs/triply-api/index.md
@@ -101,14 +101,14 @@ curl -H 'Authorization: Bearer TOKEN' -H 'Content-Type: application/json' -X POS
Upper-case letter words in json after `-d` must be replaced by the following values:

- `NAME` :: The name of the dataset in the url.
-- `ACCESS_LEVEL` :: *public*, *private* or *internal*. For more information visit [Access levels in TriplyDB](/triply-db-getting-started/reference/#access-levels).
+- `ACCESS_LEVEL` :: *public*, *private* or *internal*. For more information visit [Access levels in TriplyDB](../triply-db-getting-started/reference/#access-levels).
- `DISPLAY_NAME` :: The display name of the dataset.
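Putting the values above together, a filled-in request might look as follows. This is a sketch only: the endpoint URL and the JSON key names are assumptions that are not confirmed on this page, and the concrete values are merely examples.

```sh
# Sketch: the endpoint URL and JSON key names are assumptions; replace
# TOKEN and ACCOUNT with your own API token and account name.
curl -H 'Authorization: Bearer TOKEN' \
     -H 'Content-Type: application/json' \
     -X POST \
     -d '{"name": "my-dataset", "accessLevel": "private", "displayName": "My Dataset"}' \
     'https://api.triplydb.com/datasets/ACCOUNT'
```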


### Upload data to a dataset

You can upload a data file via the Triply API. You need to [use the API Token](#using-the-api-token) and send an HTTP POST request with data specifying the local file path.
-The list of supported file extentions can be checked in [Adding data: File upload](/triply-db-getting-started/uploading-data/#adding-data-file-upload) documentation.
+The list of supported file extensions can be checked in [Adding data: File upload](../triply-db-getting-started/uploading-data/#adding-data-file-upload) documentation.
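A minimal upload sketch, assuming a plain multipart POST, is shown below; the target URI is deliberately left as a placeholder and should be taken from the URI example that follows.

```sh
# Sketch only: 'URI' is a placeholder for the upload URI shown in the
# example below; the multipart field name 'file' is an assumption.
curl -H 'Authorization: Bearer TOKEN' \
     -X POST \
     -F 'file=@./my-data.ttl' \
     'URI'
```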

The example of the URI:
```sh
@@ -190,7 +190,7 @@ The above example includes the following GRLC annotations:

## LD Browser API

-Triply APIs provide a convenient way to access data used by [LD Browser](/triply-db-getting-started/viewing-data/#linked-data-browser), which offers a comprehensive overview of a specific IRI. By using Triply API for a specific IRI, you can retrieve the associated 'document' in the `.nt` format that describes the IRI.
+Triply APIs provide a convenient way to access data used by [LD Browser](../triply-db-getting-started/viewing-data/#linked-data-browser), which offers a comprehensive overview of a specific IRI. By using Triply API for a specific IRI, you can retrieve the associated 'document' in the `.nt` format that describes the IRI.

To make an API request for a specific instance, you can use the following URI path:

@@ -203,7 +203,7 @@ To illustrate this, let's take the example of the DBpedia dataset and the [speci
```none
https://api.triplydb.com/datasets/DBpedia-association/dbpedia/describe.nt?resource=http%3A%2F%2Fdbpedia.org%2Fresource%2FMona_Lisa
```
-in your browser, the `.nt` document describing the 'Mona Lisa' instance will be automatically downloaded. You can then upload this file to a dataset and [visualize it in a graph](/yasgui/#network-triplydb-plugin). Figure 1 illustrates the retrieved graph for the ‘Mona Lisa’ instance.
+in your browser, the `.nt` document describing the 'Mona Lisa' instance will be automatically downloaded. You can then upload this file to a dataset and [visualize it in a graph](../yasgui/#network-triplydb-plugin). Figure 1 illustrates the retrieved graph for the ‘Mona Lisa’ instance.
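The same document can also be retrieved from the command line, for example with curl (the URI is the one shown above; the output file name is arbitrary):

```sh
# Download the .nt document that describes the 'Mona Lisa' instance.
curl -o mona-lisa.nt \
  'https://api.triplydb.com/datasets/DBpedia-association/dbpedia/describe.nt?resource=http%3A%2F%2Fdbpedia.org%2Fresource%2FMona_Lisa'
```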

![Figure 1](../assets/MonaLisaGraph.png)

@@ -381,7 +381,7 @@ Everybody who has access to the dataset also has access to its services, includi
- For *Internal* datasets, only users that are logged into the triple store can issue queries.
- For *Private* datasets, only users that are logged into the triple store and are members of `ACCOUNT` can issue queries.
-Notice that for professional use it is easier and better to use [saved queries](/triply-db-getting-started/saved-queries/#saved-queries). Saved queries have persistent URIs, descriptive metadata, versioning, and support for reliable large-scale pagination ([see how to use pagination with saved query API](/triply-db-getting-started/saved-queries/#pagination-with-the-saved-query-api)). Still, if you do not have a saved query at your disposal and want to perform a custom SPARQL request against an accessible endpoint, you can do so. TriplyDB implements the SPARQL 1.1 Query Protocol standard for this purpose.
+Notice that for professional use it is easier and better to use [saved queries](../triply-db-getting-started/saved-queries/#saved-queries). Saved queries have persistent URIs, descriptive metadata, versioning, and support for reliable large-scale pagination ([see how to use pagination with saved query API](../triply-db-getting-started/saved-queries/#pagination-with-the-saved-query-api)). Still, if you do not have a saved query at your disposal and want to perform a custom SPARQL request against an accessible endpoint, you can do so. TriplyDB implements the SPARQL 1.1 Query Protocol standard for this purpose.
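As a minimal sketch of such a custom request, a SPARQL 1.1 Query Protocol call could look like the following. The endpoint URL is a placeholder; use the SPARQL endpoint of the dataset and service you have access to.

```sh
# Sketch: direct POST of a SPARQL query, following the SPARQL 1.1 Query
# Protocol. The endpoint URL below is a placeholder.
curl -X POST 'https://api.triplydb.com/datasets/ACCOUNT/DATASET/services/SERVICE/sparql' \
     -H 'Authorization: Bearer TOKEN' \
     -H 'Content-Type: application/sparql-query' \
     -H 'Accept: application/sparql-results+json' \
     --data 'SELECT * WHERE { ?s ?p ?o } LIMIT 10'
```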
### Sending a SPARQL Query request
2 changes: 1 addition & 1 deletion docs/triply-db-getting-started/index.md
@@ -10,4 +10,4 @@ path: "/docs/triply-db-getting-started"
TriplyDB allows you to store, publish, and use linked data Knowledge
Graphs. TriplyDB makes it easy to upload linked data and expose it
through various APIs (SPARQL, Elasticsearch, LDF, REST). [Read
-More](/triply-api)
+More](../triply-api)
32 changes: 16 additions & 16 deletions docs/triply-db-getting-started/saved-queries/index.md
@@ -12,7 +12,7 @@ removing the hassle of figuring out how to run a SPARQL query.
There are two ways to create a saved query.
_You need to be logged in and have authorization rights on the dataset to use this feature_

-1. When working from the [SPARQL IDE](/triply-db-getting-started/viewing-data#sparql-ide)
+1. When working from the [SPARQL IDE](../viewing-data#sparql-ide)
2. Using the Saved Queries tab in a dataset

Creating a saved query with the SPARQL IDE is done by writing a query/visualization and hitting the save button ![The save query button highlighted](../../assets/save-query-highlighted.png)
@@ -31,7 +31,7 @@ If you want to delete a saved query, you can do so by clicking the three dots on

### Sharing a saved query

-To share a saved query, for example in [Data Stories](/triply-db-getting-started/data-stories#data-stories), you can copy the link that is
+To share a saved query, for example in [Data Stories](../data-stories#data-stories), you can copy the link that is
used when you open the query in TriplyDB. Let's say you have a query called
`Timelined-Cars-BETA` in the dataset `core` under the account `dbpedia` and you
want to use version 9. Then the following link would be used
@@ -50,7 +50,7 @@ https://triplydb.com/DBpedia-association/-/queries/timeline-cars

The result of a query can be downloaded via the TriplyDB interface. After saving the query, open it in TriplyDB. e.g. <https://triplydb.com/DBpedia-association/-/queries/timeline-cars/>.

-You can download results in different data format, depending on which [visualization option](/yasgui/#visualizations) you use.
+You can download results in different data format, depending on which [visualization option](../../yasgui/#visualizations) you use.
For example, if you want to download the results in a `.json` format, you can choose the option `Response` and click on the download icon or scroll down and click on `Download result`.

![Download the query result via the download icon.](../../assets/queryResult.png)
@@ -62,16 +62,16 @@ Below is a table of all supported visualizations and what format of results they

| **Visualization option** | **Result data format** |
| ------------------------------------------------------------ | ---------------------- |
-| [Table](/yasgui/#table) | `.csv` |
-| [Response](/yasgui/#response) | `.json` |
-| [Gallery](/yasgui/#gallery-triplydb-plugin) | Download not supported |
-| [Chart](/yasgui/#chart-triplydb-plugin) | `.svg` |
-| [Geo](/yasgui/#geo-triplydb-plugin) | Download not supported |
-| [Geo-3D](/yasgui/#geo-3d-triplydb-only) | Download not supported |
-| [Geo events](/yasgui/#geo-events-triplydb-plugin)| Download not supported |
-| [Markup](/yasgui/#markup-triplydb-plugin) | `.svg`, `.html` |
-| [Network](/yasgui/#network-triplydb-plugin) | `.png` |
-| [Timeline](/yasgui/#timeline-triplydb-plugin) | Download not supported |
+| [Table](../../yasgui/#table) | `.csv` |
+| [Response](../../yasgui/#response) | `.json` |
+| [Gallery](../../yasgui/#gallery-triplydb-plugin) | Download not supported |
+| [Chart](../../yasgui/#chart-triplydb-plugin) | `.svg` |
+| [Geo](../../yasgui/#geo-triplydb-plugin) | Download not supported |
+| [Geo-3D](../../yasgui/#geo-3d-triplydb-only) | Download not supported |
+| [Geo events](../../yasgui/#geo-events-triplydb-plugin)| Download not supported |
+| [Markup](../../yasgui/#markup-triplydb-plugin) | `.svg`, `.html` |
+| [Network](../../yasgui/#network-triplydb-plugin) | `.png` |
+| [Timeline](../../yasgui/#timeline-triplydb-plugin) | Download not supported |



@@ -105,7 +105,7 @@ link:

#### Pagination with TriplyDB.js

-**TriplyDB.js** is the official programming library for interacting with [TriplyDB](/triply-db-getting-started/). TriplyDB.js allows the user to connect to a TriplyDB instance via the TypeScript language. TriplyDB.js has the advantage that it can handle pagination internally so it can reliably retrieve a large number of results.
+**TriplyDB.js** is the official programming library for interacting with [TriplyDB](../). TriplyDB.js allows the user to connect to a TriplyDB instance via the TypeScript language. TriplyDB.js has the advantage that it can handle pagination internally so it can reliably retrieve a large number of results.

To get the output for a `construct` or `select` query, follow these steps:

@@ -193,7 +193,7 @@ SPARQL queries as a RESTful API, also means you can transport your data to your

Clicking the '</>' button opens the code snippet screen. Here you select the snippet in the language you want to have, either Python or R. You can then copy the snippet, by clicking the 'copy to clipboard' button or selecting the snippet and pressing `ctrl-c`. Now you can paste the code in the location you want to use the data. The data is stored in the `data` variable in `JSON` format.

-When the SPARQL query is not public, but instead either private or internal, you will need to add an authorization header to the get request. Without the authorization header the request will return an incorrect response. Checkout [Creating your API token](/triply-api/#creating-an-api-token) about creating your API-token for the authorization header.
+When the SPARQL query is not public, but instead either private or internal, you will need to add an authorization header to the get request. Without the authorization header the request will return an incorrect response. Checkout [Creating your API token](../../triply-api/#creating-an-api-token) about creating your API-token for the authorization header.
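For example, a request for a non-public saved query might add the header as follows. The query URL below is a placeholder; copy the actual request URL from the '</>' code snippet screen.

```sh
# Sketch: the URL is a placeholder; take the real request URL from the
# code snippet screen of your saved query.
curl -H 'Authorization: Bearer TOKEN' \
     'https://api.triplydb.com/queries/ACCOUNT/QUERY-NAME/run'
```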

Check out the [SPARQL pagination page](#download-more-than-10-000-query-results-sparql-pagination) when you want to query a SPARQL query that holds more than 10.000 results. The [SPARQL pagination page ](#download-more-than-10-000-query-results-sparql-pagination) will explain how you can retrieve the complete set.

@@ -213,4 +213,4 @@ Users can specify additional metadata inside the query string, by using the GRLC
#+ frequency: hourly
```

-See the [Triply API documentation](/triply-api#queries) for how to retrieve query metadata, including how to retrieve GRLC annotations.
+See the [Triply API documentation](../../triply-api#queries) for how to retrieve query metadata, including how to retrieve GRLC annotations.
8 changes: 4 additions & 4 deletions docs/triply-db-getting-started/uploading-data/index.md
@@ -22,18 +22,18 @@ The following steps allow a new linked datasets to be created:

5. Optionally, enter a dataset description. You can use rich text
formatting by using Markdown. See [our section about
-Markdown](/triply-db-getting-started/reference#markdown-support) for details.
+Markdown](../reference#markdown-support) for details.

6. Optionally, change the access level for the dataset. By default
this is set to “Private”. See [dataset access
-levels](/triply-db-getting-started/reference#access-levels) for more information.
+levels](../reference#access-levels) for more information.


![The “Add dataset” dialog](../../assets/createdataset.png)

-When datasets are Public (see [Access Levels](/triply-db-getting-started/reference#access-levels)), they
+When datasets are Public (see [Access Levels](../reference#access-levels)), they
automatically expose metadata and are automatically crawled and
-indexed by popular search engines (see [Metadata](/triply-db-getting-started/publishing-data#entering-metadata)).
+indexed by popular search engines (see [Metadata](../publishing-data#entering-metadata)).

## Adding data

6 changes: 3 additions & 3 deletions docs/triply-db-getting-started/viewing-data/index.md
@@ -132,7 +132,7 @@ Open Source [Yasgui](../yasgui) query editor.

It is often useful to save a SPARQL query for later use. This is
achieved by clicking on the save icon in the top-right corner of the
-SPARQL Editor. Doing so will create a [Save Query](/triply-db-getting-started/saved-queries#saved-queries).
+SPARQL Editor. Doing so will create a [Save Query](../saved-queries#saved-queries).

### Sharing a SPARQL query

@@ -148,15 +148,15 @@ from which the SPARQL query can be copied in the following three forms:
visualizations. Long URLs are not supported by some application
that cut off a URL after a maximum length (often 1,024
characters). Use one of the other two options or use [Saved
-Queries](/triply-db-getting-started/saved-queries#saved-queries) to avoid such restrictions.
+Queries](../saved-queries#saved-queries) to avoid such restrictions.

2. A short URL that redirects to the full URL-encoded SPARQL query.

3. A cURL command that can be copy/pasted into a terminal
application that supports this command. cURL is often used by
programmers to test HTTP(S) requests.

-[Saved Queries](/triply-db-getting-started/saved-queries#saved-queries) are a more modern way of sharing SPARQL queries.
+[Saved Queries](../saved-queries#saved-queries) are a more modern way of sharing SPARQL queries.
They do not have any of the technical limitations that occur with
URL-encoded queries.

10 changes: 5 additions & 5 deletions docs/triply-etl/assert/index.md
@@ -27,8 +27,8 @@ Assertion are statements of fact. In linked data, assertions are commonly calle
TriplyETL supports the following assertion approaches:

- 3A. **RATT** (RDF All The Things) contains a core set of TypeScript functions for making linked data assertions:
-  - [RATT Term Assertions](/triply-etl/assert/ratt/term): functions that are used to assert terms (IRIs or literals).
-  - [RATT Statement Assertions](/triply-etl/assert/ratt/statement): functions that are used to assert statements (triples or quads).
+  - [RATT Term Assertions](ratt/term): functions that are used to assert terms (IRIs or literals).
+  - [RATT Statement Assertions](ratt/statement): functions that are used to assert statements (triples or quads).
<!--
- 3B. [**JSON-LD**](/triply-etl/assert/json-ld) can be used to assert data according to a JSON-LD Context.
-->
@@ -37,6 +37,6 @@ TriplyETL supports the following assertion approaches:

After linked data has been asserted into the Internal Store, the following steps can be performend:

-- [4. **Enrich**](/triply-etl/enrich/) improves or extends linked data in the Internal Store.
-- [5. **Validate**](/triply-etl/validate) ensures that linked data in the Internal Store is correct.
-- [6. **Publish**](/triply-etl/publish) makes linked data available in a Triple Store for others to use.
+- [4. **Enrich**](../enrich/) improves or extends linked data in the Internal Store.
+- [5. **Validate**](../validate) ensures that linked data in the Internal Store is correct.
+- [6. **Publish**](../publish) makes linked data available in a Triple Store for others to use.
