Commit

Add a documentation site (#120)

brantburnett authored Dec 22, 2024
1 parent f4484a2 commit 9067cab
Showing 14 changed files with 381 additions and 99 deletions.
56 changes: 56 additions & 0 deletions .github/workflows/pages.yml
@@ -0,0 +1,56 @@
name: GitHub Pages

on:
push:
branches:
- main

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: pages
cancel-in-progress: false

jobs:
build-docs:
name: Build Documentation

runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup .NET
uses: actions/setup-dotnet@v4
with:
dotnet-version: 9.0.x

- name: Install docfx
run: dotnet tool update -g docfx
- name: Build documentation
run: docfx docfx.json

- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: 'artifacts/_site'

publish-docs:
name: Publish Documentation
needs: build-docs

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
actions: read
pages: write
id-token: write

# Deploy to the github-pages environment
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}

runs-on: ubuntu-latest
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
1 change: 1 addition & 0 deletions .gitignore
@@ -24,3 +24,4 @@ BenchmarkDotNet.Artifacts/
test-results/
TestResults/
.DS_Store
/api/
2 changes: 1 addition & 1 deletion COPYING.txt
@@ -1,4 +1,4 @@
Copyright 2011-2020, Google, Inc. and Snappier Authors
Copyright 2011-2024, Google, Inc. and Snappier Authors.
All rights reserved.

Redistribution and use in source and binary forms, with or without
99 changes: 2 additions & 97 deletions README.md
@@ -4,6 +4,8 @@

Snappier is a pure C# port of Google's [Snappy](https://github.com/google/snappy) compression algorithm. It is designed with speed as the primary goal, rather than compression ratio, and is ideal for compressing network traffic. Please see [the Snappy README file](https://github.com/google/snappy/blob/master/README.md) for more details on Snappy.

Complete documentation is available at <https://brantburnett.github.io/Snappier/>.

## Project Goals

The Snappier project aims to meet the following needs of the .NET community.
@@ -31,44 +33,6 @@ or
dotnet add package Snappier
```

## Block compression/decompression using a buffer you already own

```cs
using Snappier;

public class Program
{
private static byte[] Data = {0, 1, 2}; // Wherever you get the data from
public static void Main()
{
// This option assumes that you are managing buffers yourself in an efficient way.
// In this example, we're using heap allocated byte arrays, however in most cases
// you would get these buffers from a buffer pool like ArrayPool<byte> or MemoryPool<byte>.
// If the output buffer is too small, an ArgumentException is thrown. This will not
// occur in this example because a sufficient buffer is always allocated via
// Snappy.GetMaxCompressedLength or Snappy.GetUncompressedLength. There are TryCompress
// and TryDecompress overloads that return false if the output buffer is too small
// rather than throwing an exception.
// Compression
byte[] buffer = new byte[Snappy.GetMaxCompressedLength(Data)];
int compressedLength = Snappy.Compress(Data, buffer);
Span<byte> compressed = buffer.AsSpan(0, compressedLength);

// Decompression
byte[] outputBuffer = new byte[Snappy.GetUncompressedLength(compressed)];
int decompressedLength = Snappy.Decompress(compressed, outputBuffer);

for (var i = 0; i < decompressedLength; i++)
{
// Do something with the data
}
}
}
```

## Block compression/decompression using a memory pool buffer

```cs
@@ -97,65 +61,6 @@ public class Program
}
```

## Block compression/decompression using a buffer writer

```cs
using Snappier;
using System.Buffers;

public class Program
{
private static byte[] Data = {0, 1, 2}; // Wherever you get the data from
public static void Main()
{
// This option uses `IBufferWriter<byte>`. In .NET 6 you can get a simple
// implementation such as `ArrayBufferWriter<byte>` but it may also be a `PipeWriter`
// or any other more advanced implementation of `IBufferWriter<byte>`.
// These overloads also accept a `ReadOnlySequence<byte>` which allows the source data
// to be made up of buffer segments rather than one large buffer. However, segment size
// may be a factor in performance. For compression, segments that are some multiple of
// 64KB are recommended. For decompression, simply avoid small segments.
// Compression
var compressedBufferWriter = new ArrayBufferWriter<byte>();
Snappy.Compress(new ReadOnlySequence<byte>(Data), compressedBufferWriter);
var compressedData = compressedBufferWriter.WrittenMemory;

// Decompression
var decompressedBufferWriter = new ArrayBufferWriter<byte>();
Snappy.Decompress(new ReadOnlySequence<byte>(compressedData), decompressedBufferWriter);
var decompressedData = decompressedBufferWriter.WrittenMemory;

// Do something with the data
}
}
```

## Block compression/decompression using heap allocated byte[]

```cs
using Snappier;

public class Program
{
private static byte[] Data = {0, 1, 2}; // Wherever you get the data from
public static void Main()
{
// This is generally the least efficient option,
// but in some cases may be the simplest to implement.
// Compression
byte[] compressed = Snappy.CompressToArray(Data);

// Decompression
byte[] decompressed = Snappy.DecompressToArray(compressed);
}
}
```

## Stream compression/decompression

Compressing or decompressing a stream follows the same paradigm as other compression streams in .NET. `SnappyStream` wraps an inner stream. If decompressing, you read from the `SnappyStream`; if compressing, you write to the `SnappyStream`.
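The paradigm above can be sketched as follows. This is a minimal example, assuming `SnappyStream` mirrors the familiar `GZipStream` constructor shape (inner stream, `CompressionMode`, and an optional `leaveOpen` flag); check the API reference for the exact overloads.

```cs
using System.IO;
using System.IO.Compression;
using Snappier;

public class Program
{
    public static void Main()
    {
        byte[] data = {0, 1, 2}; // Wherever you get the data from

        // Compression: write to the SnappyStream, which writes the
        // framed, compressed output to the inner stream.
        using var inner = new MemoryStream();
        using (var compressor = new SnappyStream(inner, CompressionMode.Compress, leaveOpen: true))
        {
            compressor.Write(data, 0, data.Length);
        }

        // Decompression: read from the SnappyStream, which reads the
        // compressed input from the inner stream.
        inner.Position = 0;
        using var decompressor = new SnappyStream(inner, CompressionMode.Decompress);
        using var decompressed = new MemoryStream();
        decompressor.CopyTo(decompressed);
    }
}
```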
3 changes: 2 additions & 1 deletion Snappier/Snappier.csproj
@@ -26,6 +26,7 @@
<PackageLicenseExpression>BSD-3-Clause</PackageLicenseExpression>
<PackageReadmeFile>README.md</PackageReadmeFile>
<PackageIcon>icon.png</PackageIcon>
<PackageProjectUrl>https://brantburnett.github.io/Snappier/</PackageProjectUrl>
<GenerateDocumentationFile>true</GenerateDocumentationFile>
<PublishRepositoryUrl>true</PublishRepositoryUrl>
<EmbedUntrackedSources>true</EmbedUntrackedSources>
@@ -44,7 +45,7 @@

<ItemGroup>
<None Include="..\README.md" Pack="true" PackagePath="$(PackageReadmeFile)" />
<None Include="..\icon.png" Pack="true" PackagePath="$(PackageIcon)" />
<None Include="..\images\icon.png" Pack="true" PackagePath="$(PackageIcon)" />
</ItemGroup>

<ItemGroup Condition=" '$(TargetFramework)' == 'netstandard2.0' ">
54 changes: 54 additions & 0 deletions docfx.json
@@ -0,0 +1,54 @@
{
"$schema": "https://raw.githubusercontent.com/dotnet/docfx/main/schemas/docfx.schema.json",
"metadata": [
{
"src": [
{
"src": "./Snappier",
"files": [
"**/*.csproj"
]
}
],
"dest": "api",
"properties": {
"TargetFramework": "net8.0"
}
}
],
"build": {
"content": [
{
"files": [
"**/*.{md,yml}"
],
"exclude": [
"_site/**",
"artifacts/**",
"**/BenchmarkDotNet.Artifacts/**"
]
}
],
"resource": [
{
"files": [
"images/**"
]
}
],
"output": "artifacts/_site",
"template": [
"default",
"material/material"
],
"globalMetadata": {
"_appName": "Snappier",
"_appTitle": "Snappier",
"_appLogoPath": "images/icon-48.png",
"_disableContribution": true,
"_enableSearch": true,
"pdf": false
},
"postProcessors": ["ExtractSearchIndex"]
}
}
129 changes: 129 additions & 0 deletions docs/block.md
@@ -0,0 +1,129 @@
# Block Compression

Block compression is ideal for data up to 64KB, though it may be used for data of any size. It does not include any stream
framing or CRC validation, and it does not automatically fall back to storing the data uncompressed if compression would increase its size.

## Block compression/decompression using a buffer you already own

```cs
using Snappier;

public class Program
{
private static byte[] Data = {0, 1, 2}; // Wherever you get the data from
public static void Main()
{
// This option assumes that you are managing buffers yourself in an efficient way.
// In this example, we're using heap allocated byte arrays, however in most cases
// you would get these buffers from a buffer pool like ArrayPool<byte> or MemoryPool<byte>.
// If the output buffer is too small, an ArgumentException is thrown. This will not
// occur in this example because a sufficient buffer is always allocated via
// Snappy.GetMaxCompressedLength or Snappy.GetUncompressedLength. There are TryCompress
// and TryDecompress overloads that return false if the output buffer is too small
// rather than throwing an exception.
// Compression
byte[] buffer = new byte[Snappy.GetMaxCompressedLength(Data)];
int compressedLength = Snappy.Compress(Data, buffer);
Span<byte> compressed = buffer.AsSpan(0, compressedLength);

// Decompression
byte[] outputBuffer = new byte[Snappy.GetUncompressedLength(compressed)];
int decompressedLength = Snappy.Decompress(compressed, outputBuffer);

for (var i = 0; i < decompressedLength; i++)
{
// Do something with the data
}
}
}
```

## Block compression/decompression using a memory pool buffer

```cs
using Snappier;

public class Program
{
private static byte[] Data = {0, 1, 2}; // Wherever you get the data from
public static void Main()
{
// This option uses `MemoryPool<byte>.Shared`. However, if you fail to
// dispose of the returned buffers correctly it can result in inefficient garbage collection.
// It is important to either call .Dispose() or use a using statement.
// Compression
using (IMemoryOwner<byte> compressed = Snappy.CompressToMemory(Data))
{
// Decompression
using (IMemoryOwner<byte> decompressed = Snappy.DecompressToMemory(compressed.Memory.Span))
{
// Do something with the data
}
}
}
}
```

## Block compression/decompression using a buffer writer

```cs
using Snappier;
using System.Buffers;

public class Program
{
private static byte[] Data = {0, 1, 2}; // Wherever you get the data from
public static void Main()
{
// This option uses `IBufferWriter<byte>`. In .NET 6 you can get a simple
// implementation such as `ArrayBufferWriter<byte>` but it may also be a `PipeWriter`
// or any other more advanced implementation of `IBufferWriter<byte>`.
// These overloads also accept a `ReadOnlySequence<byte>` which allows the source data
// to be made up of buffer segments rather than one large buffer. However, segment size
// may be a factor in performance. For compression, segments that are some multiple of
// 64KB are recommended. For decompression, simply avoid small segments.
// Compression
var compressedBufferWriter = new ArrayBufferWriter<byte>();
Snappy.Compress(new ReadOnlySequence<byte>(Data), compressedBufferWriter);
var compressedData = compressedBufferWriter.WrittenMemory;

// Decompression
var decompressedBufferWriter = new ArrayBufferWriter<byte>();
Snappy.Decompress(new ReadOnlySequence<byte>(compressedData), decompressedBufferWriter);
var decompressedData = decompressedBufferWriter.WrittenMemory;

// Do something with the data
}
}
```

## Block compression/decompression using heap allocated byte[]

```cs
using Snappier;

public class Program
{
private static byte[] Data = {0, 1, 2}; // Wherever you get the data from
public static void Main()
{
// This is generally the least efficient option,
// but in some cases may be the simplest to implement.
// Compression
byte[] compressed = Snappy.CompressToArray(Data);

// Decompression
byte[] decompressed = Snappy.DecompressToArray(compressed);
}
}
```