diff --git a/CONTRIBUTION.md b/CONTRIBUTION.md index 279faff3c5..69e7dab4ad 100644 --- a/CONTRIBUTION.md +++ b/CONTRIBUTION.md @@ -243,7 +243,7 @@ Here are a few highlights 1. Don't use the STL containers, iostreams, or the built-in C++ RTTI system. 1. Don't use the C++ variants of C headers (e.g., use `` instead of ``). 1. Don't use exceptions for non-fatal errors (and even then support a build flag to opt out of exceptions). -1. Types should use UpperCamelCase, values should use lowerCamelCase, and macros should use SCREAMING_SNAKE_CASE with a prefix `SLANG_`. +1. Types should use UpperCamelCase, values should use lowerCamelCase, and macros should use `SCREAMING_SNAKE_CASE` with a prefix `SLANG_`. 1. Global variables should have a `g` prefix, non-const static class members can have an `s` prefix, constant data (in the sense of static const) should have a `k` prefix, and an `m_` prefix on member variables and a `_` prefix on member functions are allowed. 1. Prefixes based on types (e.g., p for pointers) should never be used. 1. In function parameter lists, an `in`, `out`, or `io` prefix can be added to a parameter name to indicate whether a pointer/reference/buffer is intended to be used for input, output, or both input and output. diff --git a/docs/64bit-type-support.md b/docs/64bit-type-support.md index 506e054935..acff2f7707 100644 --- a/docs/64bit-type-support.md +++ b/docs/64bit-type-support.md @@ -20,9 +20,9 @@ Overview The Slang language supports 64 bit built in types. Such as -* double -* uint64_t -* int64_t +* `double` +* `uint64_t` +* `int64_t` This also applies to vector and matrix versions of these types. @@ -125,8 +125,8 @@ D3D12 | FXC/DXBC | No | No | 2 2) uint64_t support requires https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/hlsl-shader-model-6-0-features-for-direct3d-12, so DXBC is not a target. -The intrinsics available on uint64_t type are `abs`, `min`, `max`, `clamp` and `countbits`. -The intrinsics available on uint64_t type are `abs`, `min`, `max` and `clamp`. +The intrinsics available on `uint64_t` type are `abs`, `min`, `max`, `clamp` and `countbits`. +The intrinsics available on `uint64_t` type are `abs`, `min`, `max` and `clamp`. GLSL ==== diff --git a/docs/cpu-target.md b/docs/cpu-target.md index 8b9afabdd7..1229cb3dd7 100644 --- a/docs/cpu-target.md +++ b/docs/cpu-target.md @@ -52,9 +52,9 @@ SLANG_HOST_CPP_SOURCE, ///< C++ code for `host` style Using the `-target` command line option -* C_SOURCE: c -* CPP_SOURCE: cpp,c++,cxx -* HOST_CPP_SOURCE: host-cpp,host-c++,host-cxx +* `C_SOURCE`: c +* `CPP_SOURCE`: cpp,c++,cxx +* `HOST_CPP_SOURCE`: host-cpp,host-c++,host-cxx Note! Output of C source is not currently supported. @@ -70,11 +70,11 @@ SLANG_OBJECT_CODE, ///< Object code that can be used for later link Using the `-target` command line option -* EXECUTABLE: exe, executable -* SHADER_SHARED_LIBRARY: sharedlib, sharedlibrary, dll -* SHADER_HOST_CALLABLE: callable, host-callable -* OBJECT_CODE: object-conde -* HOST_HOST_CALLABLE: host-host-callable +* `EXECUTABLE`: exe, executable +* `SHADER_SHARED_LIBRARY`: sharedlib, sharedlibrary, dll +* `SHADER_HOST_CALLABLE`: callable, host-callable +* `OBJECT_CODE`: object-code +* `HOST_HOST_CALLABLE`: host-host-callable Using `host-callable` types from the command line, other than to test such code compiles and can be loaded for host execution.
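As a usage sketch for the `-target` names listed in the cpu-target.md hunk above, these hypothetical `slangc` invocations show how the CPU-style outputs are selected. The file name `kernel.slang` and entry point `computeMain` are placeholders, and exact flags may differ between Slang versions:

```bash
# Emit C++ source for a compute kernel (CPP_SOURCE).
slangc kernel.slang -entry computeMain -stage compute -target cpp -o kernel.cpp

# Build a shared library the host application can load (SHADER_SHARED_LIBRARY).
slangc kernel.slang -entry computeMain -stage compute -target sharedlib -o kernel.so

# Compile to host-callable code; as noted above, from the command line this is
# mainly a check that the code compiles and can be loaded for host execution.
slangc kernel.slang -entry computeMain -stage compute -target host-callable
```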
diff --git a/docs/cuda-target.md b/docs/cuda-target.md index c59703259b..6c59690daa 100644 --- a/docs/cuda-target.md +++ b/docs/cuda-target.md @@ -256,7 +256,7 @@ If this fails - the prelude include of `cuda_fp16.h` will most likely fail on NV CUDA has the `__half` and `__half2` types defined in `cuda_fp16.h`. The `__half2` can produce results just as quickly as doing the same operation on `__half` - in essence for some operations `__half2` is [SIMD](https://en.wikipedia.org/wiki/SIMD) like. The half implementation in Slang tries to take advantage of this optimization. -Since Slang supports up to 4 wide vectors Slang has to build on CUDAs half support. The types _`_half3` and `__half4` are implemented in `slang-cuda-prelude.h` for this reason. It is worth noting that `__half3` is made up of a `__half2` and a `__half`. As `__half2` is 4 byte aligned, this means `__half3` is actually 8 bytes, rather than 6 bytes that might be expected. +Since Slang supports up to 4 wide vectors Slang has to build on CUDA's half support. The types `__half3` and `__half4` are implemented in `slang-cuda-prelude.h` for this reason. It is worth noting that `__half3` is made up of a `__half2` and a `__half`. As `__half2` is 4 byte aligned, this means `__half3` is actually 8 bytes, rather than the 6 bytes that might be expected. One area where this optimization isn't fully used is in comparisons - as in effect Slang treats all the vector/matrix half comparisons as if they are scalar. This could perhaps be improved on in the future. Doing so would require using features that are not directly available in the CUDA headers. @@ -265,7 +265,7 @@ Wave Intrinsics There is broad support for [HLSL Wave intrinsics](https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/hlsl-shader-model-6-0-features-for-direct3d-12), including support for [SM 6.5 intrinsics](https://microsoft.github.io/DirectX-Specs/d3d/HLSL_ShaderModel6_5.html). -Most Wave intrinsics will work with vector, matrix or scalar types of typical built in types - uint, int, float, double, uint64_t, int64_t. +Most Wave intrinsics will work with vector, matrix or scalar types of typical built-in types - `uint`, `int`, `float`, `double`, `uint64_t`, `int64_t`. The support is provided via both the Slang core module as well as the Slang CUDA prelude found in 'prelude/slang-cuda-prelude.h'. Many Wave intrinsics are not directly applicable within CUDA, which supplies more low-level mechanisms. The implementation of most Wave functions works most optimally in a 'Wave' where all lanes are used. If all lanes from index 0 to pow2(n) -1 are used (which is also true if all lanes are used) a binary reduction is typically applied. If this is not the case the implementation falls back on a slow path which is linear in the number of active lanes, and so is typically significantly less performant. diff --git a/docs/design/experimental.md b/docs/design/experimental.md index 28f6ef9cde..38707ab1c7 100644 --- a/docs/design/experimental.md +++ b/docs/design/experimental.md @@ -31,7 +31,7 @@ Adding Experimental Interfaces When the above recommendations cannot be followed, as with features that are expected to be iterated on or are regarded as temporary, there are additional recommendations. -Interfaces that are expected to change must be marked "_Experimental" in their class name and in their UUID name. +Interfaces that are expected to change must be marked `_Experimental` in their class name and in their UUID name.
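To see concretely how the CUDA-specific pieces described in the cuda-target.md hunk above end up in the output (the `__half3`/`__half4` helpers and the Wave intrinsic fallbacks pulled in from `slang-cuda-prelude.h`), one can emit CUDA source directly. This is a hedged sketch; `kernel.slang` and `computeMain` are placeholder names:

```bash
# Emit CUDA C++ source; the generated .cu carries the CUDA prelude, so the
# __half3/__half4 definitions and Wave intrinsic helpers are visible in one file.
slangc kernel.slang -entry computeMain -stage compute -target cuda -o kernel.cu

# Or compile all the way to PTX (requires NVRTC to be available).
slangc kernel.slang -entry computeMain -stage compute -target ptx -o kernel.ptx
```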
For example, diff --git a/docs/gfx-user-guide/unsupported-formats.md b/docs/gfx-user-guide/unsupported-formats.md index 54fd1a0f16..f93567a950 100644 --- a/docs/gfx-user-guide/unsupported-formats.md +++ b/docs/gfx-user-guide/unsupported-formats.md @@ -1,263 +1,266 @@ Unsupported Formats ====================== -GFX currently does not support the following listed D3D and Vulkan formats. With the exception of D24_UNORM_S8_UINT, these formats have been omitted as their counterpart API does not have a corresponding format. D24_UNORM_S8_UINT has been omitted as it is only supported by Nvidia. -DXGI_FORMAT_R32G8X24_TYPELESS \ -DXGI_FORMAT_D32_FLOAT_S8X24_UINT \ -DXGI_FORMAT_R32_FLOAT_X8X24_TYPELESS \ -DXGI_FORMAT_X32_TYPELESS_G8X24_UINT \ -DXGI_FORMAT_R24G8_TYPELESS \ -DXGI_FORMAT_D24_UNORM_S8_UINT \ -DXGI_FORMAT_R24_UNORM_X8_TYPELESS \ -DXGI_FORMAT_X24_TYPELESS_G8_UINT \ -DXGI_FORMAT_A8_UNORM \ -DXGI_FORMAT_R1_UNORM \ -DXGI_FORMAT_R8G8_B8G8_UNORM \ -DXGI_FORMAT_G8R8_G8B8_UNORM \ -DXGI_FORMAT_BC1_TYPELESS \ -DXGI_FORMAT_BC2_TYPELESS \ -DXGI_FORMAT_BC3_TYPELESS \ -DXGI_FORMAT_BC4_TYPELESS \ -DXGI_FORMAT_BC5_TYPELESS \ -DXGI_FORMAT_B8G8R8X8_UNORM \ -DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM \ -DXGI_FORMAT_B8G8R8X8_TYPELESS \ -DXGI_FORMAT_B8G8R8X8_UNORM_SRGB \ -DXGI_FORMAT_BC6H_TYPELESS \ -DXGI_FORMAT_BC7_TYPELESS \ -DXGI_FORMAT_AYUV \ -DXGI_FORMAT_Y410 \ -DXGI_FORMAT_Y416 \ -DXGI_FORMAT_NV12 \ -DXGI_FORMAT_P010 \ -DXGI_FORMAT_P016 \ -DXGI_FORMAT_420_OPAQUE \ -DXGI_FORMAT_YUY2 \ -DXGI_FORMAT_Y210 \ -DXGI_FORMAT_Y216 \ -DXGI_FORMAT_NV11 \ -DXGI_FORMAT_AI44 \ -DXGI_FORMAT_IA44 \ -DXGI_FORMAT_P8 \ -DXGI_FORMAT_A8P8 \ -DXGI_FORMAT_P208 \ -DXGI_FORMAT_V208 \ -DXGI_FORMAT_V408 \ -DXGI_FORMAT_SAMPLER_FEEDBACK_MIN_MIP_OPAQUE \ -DXGI_FORMAT_SAMPLER_FEEDBACK_MIP_REGION_USED_OPAQUE \ +GFX currently does not support the following listed D3D and Vulkan formats. +With the exception of `D24_UNORM_S8_UINT`, these formats have been omitted as +their counterpart API does not have a corresponding format. `D24_UNORM_S8_UINT` +has been omitted as it is only supported by Nvidia. 
-VK_FORMAT_R4G4_UNORM_PACK8 \ -VK_FORMAT_R4G4B4A4_UNORM_PACK16 \ -VK_FORMAT_B4G4R4A4_UNORM_PACK16 \ -VK_FORMAT_B5G6R5_UNORM_PACK16 \ -VK_FORMAT_R5G5B5A1_UNORM_PACK16 \ -VK_FORMAT_B5G5R5A1_UNORM_PACK16 \ -VK_FORMAT_R8_USCALED \ -VK_FORMAT_R8_SSCALED \ -VK_FORMAT_R8_SRGB \ -VK_FORMAT_R8G8_USCALED \ -VK_FORMAT_R8G8_SSCALED \ -VK_FORMAT_R8G8_SRGB \ -VK_FORMAT_R8G8B8_UNORM \ -VK_FORMAT_R8G8B8_SNORM \ -VK_FORMAT_R8G8B8_USCALED \ -VK_FORMAT_R8G8B8_SSCALED \ -VK_FORMAT_R8G8B8_UINT \ -VK_FORMAT_R8G8B8_SINT \ -VK_FORMAT_R8G8B8_SRGB \ -VK_FORMAT_B8G8R8_UNORM \ -VK_FORMAT_B8G8R8_SNORM \ -VK_FORMAT_B8G8R8_USCALED \ -VK_FORMAT_B8G8R8_SSCALED \ -VK_FORMAT_B8G8R8_UINT \ -VK_FORMAT_B8G8R8_SINT \ -VK_FORMAT_B8G8R8_SRGB \ -VK_FORMAT_R8G8B8A8_USCALED \ -VK_FORMAT_R8G8B8A8_SSCALED \ -VK_FORMAT_B8G8R8A8_SNORM \ -VK_FORMAT_B8G8R8A8_USCALED \ -VK_FORMAT_B8G8R8A8_SSCALED \ -VK_FORMAT_B8G8R8A8_UINT \ -VK_FORMAT_B8G8R8A8_SINT \ -VK_FORMAT_A8B8G8R8_UNORM_PACK32 \ -VK_FORMAT_A8B8G8R8_SNORM_PACK32 \ -VK_FORMAT_A8B8G8R8_USCALED_PACK32 \ -VK_FORMAT_A8B8G8R8_SSCALED_PACK32 \ -VK_FORMAT_A8B8G8R8_UINT_PACK32 \ -VK_FORMAT_A8B8G8R8_SINT_PACK32 \ -VK_FORMAT_A8B8G8R8_SRGB_PACK32 \ -VK_FORMAT_A2R10G10B10_UNORM_PACK32 \ -VK_FORMAT_A2R10G10B10_SNORM_PACK32 \ -VK_FORMAT_A2R10G10B10_USCALED_PACK32 \ -VK_FORMAT_A2R10G10B10_SSCALED_PACK32 \ -VK_FORMAT_A2R10G10B10_UINT_PACK32 \ -VK_FORMAT_A2R10G10B10_SINT_PACK32 \ -VK_FORMAT_A2B10G10R10_SNORM_PACK32 \ -VK_FORMAT_A2B10G10R10_USCALED_PACK32 \ -VK_FORMAT_A2B10G10R10_SSCALED_PACK32 \ -VK_FORMAT_A2B10G10R10_SINT_PACK32 \ -VK_FORMAT_R16_USCALED \ -VK_FORMAT_R16_SSCALED \ -VK_FORMAT_R16G16_USCALED \ -VK_FORMAT_R16G16_SSCALED \ -VK_FORMAT_R16G16B16_UNORM \ -VK_FORMAT_R16G16B16_SNORM \ -VK_FORMAT_R16G16B16_USCALED \ -VK_FORMAT_R16G16B16_SSCALED \ -VK_FORMAT_R16G16B16_UINT \ -VK_FORMAT_R16G16B16_SINT \ -VK_FORMAT_R16G16B16_SFLOAT \ -VK_FORMAT_R16G16B16A16_USCALED \ -VK_FORMAT_R16G16B16A16_SSCALED \ -VK_FORMAT_R64_UINT \ -VK_FORMAT_R64_SINT \ -VK_FORMAT_R64_SFLOAT \ -VK_FORMAT_R64G64_UINT \ -VK_FORMAT_R64G64_SINT \ -VK_FORMAT_R64G64_SFLOAT \ -VK_FORMAT_R64G64B64_UINT \ -VK_FORMAT_R64G64B64_SINT \ -VK_FORMAT_R64G64B64_SFLOAT \ -VK_FORMAT_R64G64B64A64_UINT \ -VK_FORMAT_R64G64B64A64_SINT \ -VK_FORMAT_R64G64B64A64_SFLOAT \ -VK_FORMAT_X8_D24_UNORM_PACK32 \ -VK_FORMAT_S8_UINT \ -VK_FORMAT_D16_UNORM_S8_UINT \ -VK_FORMAT_D24_UNORM_S8_UINT \ -VK_FORMAT_D32_SFLOAT_S8_UINT \ -VK_FORMAT_BC1_RGB_UNORM_BLOCK \ -VK_FORMAT_BC1_RGB_SRGB_BLOCK \ -VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK \ -VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK \ -VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK \ -VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK \ -VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK \ -VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK \ -VK_FORMAT_EAC_R11_UNORM_BLOCK \ -VK_FORMAT_EAC_R11_SNORM_BLOCK \ -VK_FORMAT_EAC_R11G11_UNORM_BLOCK \ -VK_FORMAT_EAC_R11G11_SNORM_BLOCK \ -VK_FORMAT_ASTC_4x4_UNORM_BLOCK \ -VK_FORMAT_ASTC_4x4_SRGB_BLOCK \ -VK_FORMAT_ASTC_5x4_UNORM_BLOCK \ -VK_FORMAT_ASTC_5x4_SRGB_BLOCK \ -VK_FORMAT_ASTC_5x5_UNORM_BLOCK \ -VK_FORMAT_ASTC_5x5_SRGB_BLOCK \ -VK_FORMAT_ASTC_6x5_UNORM_BLOCK \ -VK_FORMAT_ASTC_6x5_SRGB_BLOCK \ -VK_FORMAT_ASTC_6x6_UNORM_BLOCK \ -VK_FORMAT_ASTC_6x6_SRGB_BLOCK \ -VK_FORMAT_ASTC_8x5_UNORM_BLOCK \ -VK_FORMAT_ASTC_8x5_SRGB_BLOCK \ -VK_FORMAT_ASTC_8x6_UNORM_BLOCK \ -VK_FORMAT_ASTC_8x6_SRGB_BLOCK \ -VK_FORMAT_ASTC_8x8_UNORM_BLOCK \ -VK_FORMAT_ASTC_8x8_SRGB_BLOCK \ -VK_FORMAT_ASTC_10x5_UNORM_BLOCK \ -VK_FORMAT_ASTC_10x5_SRGB_BLOCK \ -VK_FORMAT_ASTC_10x6_UNORM_BLOCK \ -VK_FORMAT_ASTC_10x6_SRGB_BLOCK \ -VK_FORMAT_ASTC_10x8_UNORM_BLOCK \ 
-VK_FORMAT_ASTC_10x8_SRGB_BLOCK \ -VK_FORMAT_ASTC_10x10_UNORM_BLOCK \ -VK_FORMAT_ASTC_10x10_SRGB_BLOCK \ -VK_FORMAT_ASTC_12x10_UNORM_BLOCK \ -VK_FORMAT_ASTC_12x10_SRGB_BLOCK \ -VK_FORMAT_ASTC_12x12_UNORM_BLOCK \ -VK_FORMAT_ASTC_12x12_SRGB_BLOCK \ -VK_FORMAT_G8B8G8R8_422_UNORM \ -VK_FORMAT_B8G8R8G8_422_UNORM \ -VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM \ -VK_FORMAT_G8_B8R8_2PLANE_420_UNORM \ -VK_FORMAT_G8_B8_R8_3PLANE_422_UNORM \ -VK_FORMAT_G8_B8R8_2PLANE_422_UNORM \ -VK_FORMAT_G8_B8_R8_3PLANE_444_UNORM \ -VK_FORMAT_R10X6_UNORM_PACK16 \ -VK_FORMAT_R10X6G10X6_UNORM_2PACK16 \ -VK_FORMAT_R10X6G10X6B10X6A10X6_UNORM_4PACK16 \ -VK_FORMAT_G10X6B10X6G10X6R10X6_422_UNORM_4PACK16 \ -VK_FORMAT_B10X6G10X6R10X6G10X6_422_UNORM_4PACK16 \ -VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_420_UNORM_3PACK16 \ -VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16 \ -VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_422_UNORM_3PACK16 \ -VK_FORMAT_G10X6_B10X6R10X6_2PLANE_422_UNORM_3PACK16 \ -VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_444_UNORM_3PACK16 \ -VK_FORMAT_R12X4_UNORM_PACK16 \ -VK_FORMAT_R12X4G12X4_UNORM_2PACK16 \ -VK_FORMAT_R12X4G12X4B12X4A12X4_UNORM_4PACK16 \ -VK_FORMAT_G12X4B12X4G12X4R12X4_422_UNORM_4PACK16 \ -VK_FORMAT_B12X4G12X4R12X4G12X4_422_UNORM_4PACK16 \ -VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_420_UNORM_3PACK16 \ -VK_FORMAT_G12X4_B12X4R12X4_2PLANE_420_UNORM_3PACK16 \ -VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_422_UNORM_3PACK16 \ -VK_FORMAT_G12X4_B12X4R12X4_2PLANE_422_UNORM_3PACK16 \ -VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_444_UNORM_3PACK16 \ -VK_FORMAT_G16B16G16R16_422_UNORM \ -VK_FORMAT_B16G16R16G16_422_UNORM \ -VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM \ -VK_FORMAT_G16_B16R16_2PLANE_420_UNORM \ -VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM \ -VK_FORMAT_G16_B16R16_2PLANE_422_UNORM \ -VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM \ -VK_FORMAT_PVRTC1_2BPP_UNORM_BLOCK_IMG \ -VK_FORMAT_PVRTC1_4BPP_UNORM_BLOCK_IMG \ -VK_FORMAT_PVRTC2_2BPP_UNORM_BLOCK_IMG \ -VK_FORMAT_PVRTC2_4BPP_UNORM_BLOCK_IMG \ -VK_FORMAT_PVRTC1_2BPP_SRGB_BLOCK_IMG \ -VK_FORMAT_PVRTC1_4BPP_SRGB_BLOCK_IMG \ -VK_FORMAT_PVRTC2_2BPP_SRGB_BLOCK_IMG \ -VK_FORMAT_PVRTC2_4BPP_SRGB_BLOCK_IMG \ -VK_FORMAT_ASTC_4x4_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_5x4_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_5x5_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_6x5_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_6x6_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_8x5_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_8x6_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_8x8_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_10x5_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_10x6_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_10x8_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_10x10_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_12x10_SFLOAT_BLOCK_EXT \ -VK_FORMAT_ASTC_12x12_SFLOAT_BLOCK_EXT \ -VK_FORMAT_G8_B8R8_2PLANE_444_UNORM_EXT \ -VK_FORMAT_G10X6_B10X6R10X6_2PLANE_444_UNORM_3PACK16_EXT \ -VK_FORMAT_G12X4_B12X4R12X4_2PLANE_444_UNORM_3PACK16_EXT \ -VK_FORMAT_G16_B16R16_2PLANE_444_UNORM_EXT \ -VK_FORMAT_A4B4G4R4_UNORM_PACK16_EXT \ -VK_FORMAT_G8B8G8R8_422_UNORM_KHR \ -VK_FORMAT_B8G8R8G8_422_UNORM_KHR \ -VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM_KHR \ -VK_FORMAT_G8_B8R8_2PLANE_420_UNORM_KHR \ -VK_FORMAT_G8_B8_R8_3PLANE_422_UNORM_KHR \ -VK_FORMAT_G8_B8R8_2PLANE_422_UNORM_KHR \ -VK_FORMAT_G8_B8_R8_3PLANE_444_UNORM_KHR \ -VK_FORMAT_R10X6_UNORM_PACK16_KHR \ -VK_FORMAT_R10X6G10X6_UNORM_2PACK16_KHR \ -VK_FORMAT_R10X6G10X6B10X6A10X6_UNORM_4PACK16_KHR \ -VK_FORMAT_G10X6B10X6G10X6R10X6_422_UNORM_4PACK16_KHR \ -VK_FORMAT_B10X6G10X6R10X6G10X6_422_UNORM_4PACK16_KHR \ -VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_420_UNORM_3PACK16_KHR \ -VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16_KHR \ 
-VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_422_UNORM_3PACK16_KHR \ -VK_FORMAT_G10X6_B10X6R10X6_2PLANE_422_UNORM_3PACK16_KHR \ -VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_444_UNORM_3PACK16_KHR \ -VK_FORMAT_R12X4_UNORM_PACK16_KHR \ -VK_FORMAT_R12X4G12X4_UNORM_2PACK16_KHR \ -VK_FORMAT_R12X4G12X4B12X4A12X4_UNORM_4PACK16_KHR \ -VK_FORMAT_G12X4B12X4G12X4R12X4_422_UNORM_4PACK16_KHR \ -VK_FORMAT_B12X4G12X4R12X4G12X4_422_UNORM_4PACK16_KHR \ -VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_420_UNORM_3PACK16_KHR \ -VK_FORMAT_G12X4_B12X4R12X4_2PLANE_420_UNORM_3PACK16_KHR \ -VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_422_UNORM_3PACK16_KHR \ -VK_FORMAT_G12X4_B12X4R12X4_2PLANE_422_UNORM_3PACK16_KHR \ -VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_444_UNORM_3PACK16_KHR \ -VK_FORMAT_G16B16G16R16_422_UNORM_KHR \ -VK_FORMAT_B16G16R16G16_422_UNORM_KHR \ -VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM_KHR \ -VK_FORMAT_G16_B16R16_2PLANE_420_UNORM_KHR \ -VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM_KHR \ -VK_FORMAT_G16_B16R16_2PLANE_422_UNORM_KHR \ -VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM_KHR \ No newline at end of file +- `DXGI_FORMAT_R32G8X24_TYPELESS` +- `DXGI_FORMAT_D32_FLOAT_S8X24_UINT` +- `DXGI_FORMAT_R32_FLOAT_X8X24_TYPELESS` +- `DXGI_FORMAT_X32_TYPELESS_G8X24_UINT` +- `DXGI_FORMAT_R24G8_TYPELESS` +- `DXGI_FORMAT_D24_UNORM_S8_UINT` +- `DXGI_FORMAT_R24_UNORM_X8_TYPELESS` +- `DXGI_FORMAT_X24_TYPELESS_G8_UINT` +- `DXGI_FORMAT_A8_UNORM` +- `DXGI_FORMAT_R1_UNORM` +- `DXGI_FORMAT_R8G8_B8G8_UNORM` +- `DXGI_FORMAT_G8R8_G8B8_UNORM` +- `DXGI_FORMAT_BC1_TYPELESS` +- `DXGI_FORMAT_BC2_TYPELESS` +- `DXGI_FORMAT_BC3_TYPELESS` +- `DXGI_FORMAT_BC4_TYPELESS` +- `DXGI_FORMAT_BC5_TYPELESS` +- `DXGI_FORMAT_B8G8R8X8_UNORM` +- `DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM` +- `DXGI_FORMAT_B8G8R8X8_TYPELESS` +- `DXGI_FORMAT_B8G8R8X8_UNORM_SRGB` +- `DXGI_FORMAT_BC6H_TYPELESS` +- `DXGI_FORMAT_BC7_TYPELESS` +- `DXGI_FORMAT_AYUV` +- `DXGI_FORMAT_Y410` +- `DXGI_FORMAT_Y416` +- `DXGI_FORMAT_NV12` +- `DXGI_FORMAT_P010` +- `DXGI_FORMAT_P016` +- `DXGI_FORMAT_420_OPAQUE` +- `DXGI_FORMAT_YUY2` +- `DXGI_FORMAT_Y210` +- `DXGI_FORMAT_Y216` +- `DXGI_FORMAT_NV11` +- `DXGI_FORMAT_AI44` +- `DXGI_FORMAT_IA44` +- `DXGI_FORMAT_P8` +- `DXGI_FORMAT_A8P8` +- `DXGI_FORMAT_P208` +- `DXGI_FORMAT_V208` +- `DXGI_FORMAT_V408` +- `DXGI_FORMAT_SAMPLER_FEEDBACK_MIN_MIP_OPAQUE` +- `DXGI_FORMAT_SAMPLER_FEEDBACK_MIP_REGION_USED_OPAQUE` +- `VK_FORMAT_R4G4_UNORM_PACK8` +- `VK_FORMAT_R4G4B4A4_UNORM_PACK16` +- `VK_FORMAT_B4G4R4A4_UNORM_PACK16` +- `VK_FORMAT_B5G6R5_UNORM_PACK16` +- `VK_FORMAT_R5G5B5A1_UNORM_PACK16` +- `VK_FORMAT_B5G5R5A1_UNORM_PACK16` +- `VK_FORMAT_R8_USCALED` +- `VK_FORMAT_R8_SSCALED` +- `VK_FORMAT_R8_SRGB` +- `VK_FORMAT_R8G8_USCALED` +- `VK_FORMAT_R8G8_SSCALED` +- `VK_FORMAT_R8G8_SRGB` +- `VK_FORMAT_R8G8B8_UNORM` +- `VK_FORMAT_R8G8B8_SNORM` +- `VK_FORMAT_R8G8B8_USCALED` +- `VK_FORMAT_R8G8B8_SSCALED` +- `VK_FORMAT_R8G8B8_UINT` +- `VK_FORMAT_R8G8B8_SINT` +- `VK_FORMAT_R8G8B8_SRGB` +- `VK_FORMAT_B8G8R8_UNORM` +- `VK_FORMAT_B8G8R8_SNORM` +- `VK_FORMAT_B8G8R8_USCALED` +- `VK_FORMAT_B8G8R8_SSCALED` +- `VK_FORMAT_B8G8R8_UINT` +- `VK_FORMAT_B8G8R8_SINT` +- `VK_FORMAT_B8G8R8_SRGB` +- `VK_FORMAT_R8G8B8A8_USCALED` +- `VK_FORMAT_R8G8B8A8_SSCALED` +- `VK_FORMAT_B8G8R8A8_SNORM` +- `VK_FORMAT_B8G8R8A8_USCALED` +- `VK_FORMAT_B8G8R8A8_SSCALED` +- `VK_FORMAT_B8G8R8A8_UINT` +- `VK_FORMAT_B8G8R8A8_SINT` +- `VK_FORMAT_A8B8G8R8_UNORM_PACK32` +- `VK_FORMAT_A8B8G8R8_SNORM_PACK32` +- `VK_FORMAT_A8B8G8R8_USCALED_PACK32` +- `VK_FORMAT_A8B8G8R8_SSCALED_PACK32` +- `VK_FORMAT_A8B8G8R8_UINT_PACK32` +- `VK_FORMAT_A8B8G8R8_SINT_PACK32` +- 
`VK_FORMAT_A8B8G8R8_SRGB_PACK32` +- `VK_FORMAT_A2R10G10B10_UNORM_PACK32` +- `VK_FORMAT_A2R10G10B10_SNORM_PACK32` +- `VK_FORMAT_A2R10G10B10_USCALED_PACK32` +- `VK_FORMAT_A2R10G10B10_SSCALED_PACK32` +- `VK_FORMAT_A2R10G10B10_UINT_PACK32` +- `VK_FORMAT_A2R10G10B10_SINT_PACK32` +- `VK_FORMAT_A2B10G10R10_SNORM_PACK32` +- `VK_FORMAT_A2B10G10R10_USCALED_PACK32` +- `VK_FORMAT_A2B10G10R10_SSCALED_PACK32` +- `VK_FORMAT_A2B10G10R10_SINT_PACK32` +- `VK_FORMAT_R16_USCALED` +- `VK_FORMAT_R16_SSCALED` +- `VK_FORMAT_R16G16_USCALED` +- `VK_FORMAT_R16G16_SSCALED` +- `VK_FORMAT_R16G16B16_UNORM` +- `VK_FORMAT_R16G16B16_SNORM` +- `VK_FORMAT_R16G16B16_USCALED` +- `VK_FORMAT_R16G16B16_SSCALED` +- `VK_FORMAT_R16G16B16_UINT` +- `VK_FORMAT_R16G16B16_SINT` +- `VK_FORMAT_R16G16B16_SFLOAT` +- `VK_FORMAT_R16G16B16A16_USCALED` +- `VK_FORMAT_R16G16B16A16_SSCALED` +- `VK_FORMAT_R64_UINT` +- `VK_FORMAT_R64_SINT` +- `VK_FORMAT_R64_SFLOAT` +- `VK_FORMAT_R64G64_UINT` +- `VK_FORMAT_R64G64_SINT` +- `VK_FORMAT_R64G64_SFLOAT` +- `VK_FORMAT_R64G64B64_UINT` +- `VK_FORMAT_R64G64B64_SINT` +- `VK_FORMAT_R64G64B64_SFLOAT` +- `VK_FORMAT_R64G64B64A64_UINT` +- `VK_FORMAT_R64G64B64A64_SINT` +- `VK_FORMAT_R64G64B64A64_SFLOAT` +- `VK_FORMAT_X8_D24_UNORM_PACK32` +- `VK_FORMAT_S8_UINT` +- `VK_FORMAT_D16_UNORM_S8_UINT` +- `VK_FORMAT_D24_UNORM_S8_UINT` +- `VK_FORMAT_D32_SFLOAT_S8_UINT` +- `VK_FORMAT_BC1_RGB_UNORM_BLOCK` +- `VK_FORMAT_BC1_RGB_SRGB_BLOCK` +- `VK_FORMAT_ETC2_R8G8B8_UNORM_BLOCK` +- `VK_FORMAT_ETC2_R8G8B8_SRGB_BLOCK` +- `VK_FORMAT_ETC2_R8G8B8A1_UNORM_BLOCK` +- `VK_FORMAT_ETC2_R8G8B8A1_SRGB_BLOCK` +- `VK_FORMAT_ETC2_R8G8B8A8_UNORM_BLOCK` +- `VK_FORMAT_ETC2_R8G8B8A8_SRGB_BLOCK` +- `VK_FORMAT_EAC_R11_UNORM_BLOCK` +- `VK_FORMAT_EAC_R11_SNORM_BLOCK` +- `VK_FORMAT_EAC_R11G11_UNORM_BLOCK` +- `VK_FORMAT_EAC_R11G11_SNORM_BLOCK` +- `VK_FORMAT_ASTC_4x4_UNORM_BLOCK` +- `VK_FORMAT_ASTC_4x4_SRGB_BLOCK` +- `VK_FORMAT_ASTC_5x4_UNORM_BLOCK` +- `VK_FORMAT_ASTC_5x4_SRGB_BLOCK` +- `VK_FORMAT_ASTC_5x5_UNORM_BLOCK` +- `VK_FORMAT_ASTC_5x5_SRGB_BLOCK` +- `VK_FORMAT_ASTC_6x5_UNORM_BLOCK` +- `VK_FORMAT_ASTC_6x5_SRGB_BLOCK` +- `VK_FORMAT_ASTC_6x6_UNORM_BLOCK` +- `VK_FORMAT_ASTC_6x6_SRGB_BLOCK` +- `VK_FORMAT_ASTC_8x5_UNORM_BLOCK` +- `VK_FORMAT_ASTC_8x5_SRGB_BLOCK` +- `VK_FORMAT_ASTC_8x6_UNORM_BLOCK` +- `VK_FORMAT_ASTC_8x6_SRGB_BLOCK` +- `VK_FORMAT_ASTC_8x8_UNORM_BLOCK` +- `VK_FORMAT_ASTC_8x8_SRGB_BLOCK` +- `VK_FORMAT_ASTC_10x5_UNORM_BLOCK` +- `VK_FORMAT_ASTC_10x5_SRGB_BLOCK` +- `VK_FORMAT_ASTC_10x6_UNORM_BLOCK` +- `VK_FORMAT_ASTC_10x6_SRGB_BLOCK` +- `VK_FORMAT_ASTC_10x8_UNORM_BLOCK` +- `VK_FORMAT_ASTC_10x8_SRGB_BLOCK` +- `VK_FORMAT_ASTC_10x10_UNORM_BLOCK` +- `VK_FORMAT_ASTC_10x10_SRGB_BLOCK` +- `VK_FORMAT_ASTC_12x10_UNORM_BLOCK` +- `VK_FORMAT_ASTC_12x10_SRGB_BLOCK` +- `VK_FORMAT_ASTC_12x12_UNORM_BLOCK` +- `VK_FORMAT_ASTC_12x12_SRGB_BLOCK` +- `VK_FORMAT_G8B8G8R8_422_UNORM` +- `VK_FORMAT_B8G8R8G8_422_UNORM` +- `VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM` +- `VK_FORMAT_G8_B8R8_2PLANE_420_UNORM` +- `VK_FORMAT_G8_B8_R8_3PLANE_422_UNORM` +- `VK_FORMAT_G8_B8R8_2PLANE_422_UNORM` +- `VK_FORMAT_G8_B8_R8_3PLANE_444_UNORM` +- `VK_FORMAT_R10X6_UNORM_PACK16` +- `VK_FORMAT_R10X6G10X6_UNORM_2PACK16` +- `VK_FORMAT_R10X6G10X6B10X6A10X6_UNORM_4PACK16` +- `VK_FORMAT_G10X6B10X6G10X6R10X6_422_UNORM_4PACK16` +- `VK_FORMAT_B10X6G10X6R10X6G10X6_422_UNORM_4PACK16` +- `VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_420_UNORM_3PACK16` +- `VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16` +- `VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_422_UNORM_3PACK16` +- `VK_FORMAT_G10X6_B10X6R10X6_2PLANE_422_UNORM_3PACK16` +- 
`VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_444_UNORM_3PACK16` +- `VK_FORMAT_R12X4_UNORM_PACK16` +- `VK_FORMAT_R12X4G12X4_UNORM_2PACK16` +- `VK_FORMAT_R12X4G12X4B12X4A12X4_UNORM_4PACK16` +- `VK_FORMAT_G12X4B12X4G12X4R12X4_422_UNORM_4PACK16` +- `VK_FORMAT_B12X4G12X4R12X4G12X4_422_UNORM_4PACK16` +- `VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_420_UNORM_3PACK16` +- `VK_FORMAT_G12X4_B12X4R12X4_2PLANE_420_UNORM_3PACK16` +- `VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_422_UNORM_3PACK16` +- `VK_FORMAT_G12X4_B12X4R12X4_2PLANE_422_UNORM_3PACK16` +- `VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_444_UNORM_3PACK16` +- `VK_FORMAT_G16B16G16R16_422_UNORM` +- `VK_FORMAT_B16G16R16G16_422_UNORM` +- `VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM` +- `VK_FORMAT_G16_B16R16_2PLANE_420_UNORM` +- `VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM` +- `VK_FORMAT_G16_B16R16_2PLANE_422_UNORM` +- `VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM` +- `VK_FORMAT_PVRTC1_2BPP_UNORM_BLOCK_IMG` +- `VK_FORMAT_PVRTC1_4BPP_UNORM_BLOCK_IMG` +- `VK_FORMAT_PVRTC2_2BPP_UNORM_BLOCK_IMG` +- `VK_FORMAT_PVRTC2_4BPP_UNORM_BLOCK_IMG` +- `VK_FORMAT_PVRTC1_2BPP_SRGB_BLOCK_IMG` +- `VK_FORMAT_PVRTC1_4BPP_SRGB_BLOCK_IMG` +- `VK_FORMAT_PVRTC2_2BPP_SRGB_BLOCK_IMG` +- `VK_FORMAT_PVRTC2_4BPP_SRGB_BLOCK_IMG` +- `VK_FORMAT_ASTC_4x4_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_5x4_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_5x5_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_6x5_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_6x6_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_8x5_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_8x6_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_8x8_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_10x5_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_10x6_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_10x8_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_10x10_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_12x10_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_ASTC_12x12_SFLOAT_BLOCK_EXT` +- `VK_FORMAT_G8_B8R8_2PLANE_444_UNORM_EXT` +- `VK_FORMAT_G10X6_B10X6R10X6_2PLANE_444_UNORM_3PACK16_EXT` +- `VK_FORMAT_G12X4_B12X4R12X4_2PLANE_444_UNORM_3PACK16_EXT` +- `VK_FORMAT_G16_B16R16_2PLANE_444_UNORM_EXT` +- `VK_FORMAT_A4B4G4R4_UNORM_PACK16_EXT` +- `VK_FORMAT_G8B8G8R8_422_UNORM_KHR` +- `VK_FORMAT_B8G8R8G8_422_UNORM_KHR` +- `VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM_KHR` +- `VK_FORMAT_G8_B8R8_2PLANE_420_UNORM_KHR` +- `VK_FORMAT_G8_B8_R8_3PLANE_422_UNORM_KHR` +- `VK_FORMAT_G8_B8R8_2PLANE_422_UNORM_KHR` +- `VK_FORMAT_G8_B8_R8_3PLANE_444_UNORM_KHR` +- `VK_FORMAT_R10X6_UNORM_PACK16_KHR` +- `VK_FORMAT_R10X6G10X6_UNORM_2PACK16_KHR` +- `VK_FORMAT_R10X6G10X6B10X6A10X6_UNORM_4PACK16_KHR` +- `VK_FORMAT_G10X6B10X6G10X6R10X6_422_UNORM_4PACK16_KHR` +- `VK_FORMAT_B10X6G10X6R10X6G10X6_422_UNORM_4PACK16_KHR` +- `VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_420_UNORM_3PACK16_KHR` +- `VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16_KHR` +- `VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_422_UNORM_3PACK16_KHR` +- `VK_FORMAT_G10X6_B10X6R10X6_2PLANE_422_UNORM_3PACK16_KHR` +- `VK_FORMAT_G10X6_B10X6_R10X6_3PLANE_444_UNORM_3PACK16_KHR` +- `VK_FORMAT_R12X4_UNORM_PACK16_KHR` +- `VK_FORMAT_R12X4G12X4_UNORM_2PACK16_KHR` +- `VK_FORMAT_R12X4G12X4B12X4A12X4_UNORM_4PACK16_KHR` +- `VK_FORMAT_G12X4B12X4G12X4R12X4_422_UNORM_4PACK16_KHR` +- `VK_FORMAT_B12X4G12X4R12X4G12X4_422_UNORM_4PACK16_KHR` +- `VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_420_UNORM_3PACK16_KHR` +- `VK_FORMAT_G12X4_B12X4R12X4_2PLANE_420_UNORM_3PACK16_KHR` +- `VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_422_UNORM_3PACK16_KHR` +- `VK_FORMAT_G12X4_B12X4R12X4_2PLANE_422_UNORM_3PACK16_KHR` +- `VK_FORMAT_G12X4_B12X4_R12X4_3PLANE_444_UNORM_3PACK16_KHR` +- `VK_FORMAT_G16B16G16R16_422_UNORM_KHR` +- `VK_FORMAT_B16G16R16G16_422_UNORM_KHR` +- 
`VK_FORMAT_G16_B16_R16_3PLANE_420_UNORM_KHR` +- `VK_FORMAT_G16_B16R16_2PLANE_420_UNORM_KHR` +- `VK_FORMAT_G16_B16_R16_3PLANE_422_UNORM_KHR` +- `VK_FORMAT_G16_B16R16_2PLANE_422_UNORM_KHR` +- `VK_FORMAT_G16_B16_R16_3PLANE_444_UNORM_KHR` diff --git a/docs/gpu-feature/derivatives-in-compute/derivatives-in-compute.md b/docs/gpu-feature/derivatives-in-compute/derivatives-in-compute.md index 8319202f4d..139111365a 100644 --- a/docs/gpu-feature/derivatives-in-compute/derivatives-in-compute.md +++ b/docs/gpu-feature/derivatives-in-compute/derivatives-in-compute.md @@ -6,4 +6,4 @@ GLSL syntax may also be used, but is not recommended (`derivative_group_quadsNV` Targets: * **_SPIRV:_** Enables `DerivativeGroupQuadsNV` or `DerivativeGroupLinearNV`. * **_GLSL:_** Enables `derivative_group_quadsNV` or `derivative_group_LinearNV`. -* **_HLSL:_** Does nothing. sm_6_6 is required to use derivatives in compute shaders. HLSL uses an equivlent of `DerivativeGroupQuad`. \ No newline at end of file +* **_HLSL:_** Does nothing. `sm_6_6` is required to use derivatives in compute shaders. HLSL uses an equivalent of `DerivativeGroupQuad`. diff --git a/docs/user-guide/a2-01-spirv-target-specific.md b/docs/user-guide/a2-01-spirv-target-specific.md index e0d6fd69b2..d6d1190bba 100644 --- a/docs/user-guide/a2-01-spirv-target-specific.md +++ b/docs/user-guide/a2-01-spirv-target-specific.md @@ -31,42 +31,42 @@ System-Value semantics The system-value semantics are translated to the following SPIR-V code. -| SV semantic name | SPIR-V code | -|--|--| -| SV_Barycentrics | BuiltIn BaryCoordKHR | -| SV_ClipDistance | BuiltIn ClipDistance | -| SV_CullDistance | BuiltIn CullDistance | -| SV_Coverage | BuiltIn SampleMask | -| SV_CullPrimitive | BuiltIn CullPrimitiveEXT | -| SV_Depth | BuiltIn FragDepth | -| SV_DepthGreaterEqual | BuiltIn FragDepth | -| SV_DepthLessEqual | BuiltIn FragDepth | -| SV_DispatchThreadID | BuiltIn GlobalInvocationId | -| SV_DomainLocation | BuiltIn TessCoord | -| SV_GSInstanceID | BuiltIn InvocationId | -| SV_GroupID | BuiltIn WorkgroupId | -| SV_GroupIndex | BuiltIn LocalInvocationIndex | -| SV_GroupThreadID | BuiltIn LocalInvocationId | -| SV_InnerCoverage | BuiltIn FullyCoveredEXT | -| SV_InsideTessFactor | BuiltIn TessLevelInner | -| SV_InstanceID | BuiltIn InstanceIndex | -| SV_IntersectionAttributes | *Not supported* | -| SV_IsFrontFace | BuiltIn FrontFacing | -| SV_OutputControlPointID | BuiltIn InvocationId | -| SV_PointSizenote | BuiltIn PointSize | -| SV_Position | BuiltIn Position/FragCoord | -| SV_PrimitiveID | BuiltIn PrimitiveId | -| SV_RenderTargetArrayIndex | BuiltIn Layer | -| SV_SampleIndex | BuiltIn SampleId | -| SV_ShadingRate | BuiltIn PrimitiveShadingRateKHR | -| SV_StartVertexLocation | *Not supported* | -| SV_StartInstanceLocation | *Not suported* | -| SV_StencilRef | BuiltIn FragStencilRefEXT | -| SV_Target | Location | -| SV_TessFactor | BuiltIn TessLevelOuter | -| SV_VertexID | BuiltIn VertexIndex | -| SV_ViewID | BuiltIn ViewIndex | -| SV_ViewportArrayIndex | BuiltIn ViewportIndex | +| SV semantic name              | SPIR-V code                       | +|-------------------------------|-----------------------------------| +| `SV_Barycentrics`             | `BuiltIn BaryCoordKHR`            | +| `SV_ClipDistance`             | `BuiltIn ClipDistance`            | +| `SV_CullDistance`             | `BuiltIn CullDistance`            | +| `SV_Coverage`                 | `BuiltIn SampleMask`              | +| `SV_CullPrimitive`            | `BuiltIn CullPrimitiveEXT`        | +| `SV_Depth`                    | `BuiltIn FragDepth`               | +| `SV_DepthGreaterEqual`        | `BuiltIn FragDepth`               | +| `SV_DepthLessEqual`           | `BuiltIn FragDepth`               | +| `SV_DispatchThreadID`         
| `BuiltIn GlobalInvocationId`      | +| `SV_DomainLocation`           | `BuiltIn TessCoord`               | +| `SV_GSInstanceID`             | `BuiltIn InvocationId`            | +| `SV_GroupID`                  | `BuiltIn WorkgroupId`             | +| `SV_GroupIndex`               | `BuiltIn LocalInvocationIndex`    | +| `SV_GroupThreadID`            | `BuiltIn LocalInvocationId`       | +| `SV_InnerCoverage`            | `BuiltIn FullyCoveredEXT`         | +| `SV_InsideTessFactor`         | `BuiltIn TessLevelInner`          | +| `SV_InstanceID`               | `BuiltIn InstanceIndex`           | +| `SV_IntersectionAttributes`   | *Not supported*                   | +| `SV_IsFrontFace`              | `BuiltIn FrontFacing`             | +| `SV_OutputControlPointID`     | `BuiltIn InvocationId`            | +| `SV_PointSize` (note)         | `BuiltIn PointSize`               | +| `SV_Position`                 | `BuiltIn Position/FragCoord`      | +| `SV_PrimitiveID`              | `BuiltIn PrimitiveId`             | +| `SV_RenderTargetArrayIndex`   | `BuiltIn Layer`                   | +| `SV_SampleIndex`              | `BuiltIn SampleId`                | +| `SV_ShadingRate`              | `BuiltIn PrimitiveShadingRateKHR` | +| `SV_StartVertexLocation`      | *Not supported*                   | +| `SV_StartInstanceLocation`    | *Not supported*                   | +| `SV_StencilRef`               | `BuiltIn FragStencilRefEXT`       | +| `SV_Target`                   | `Location`                        | +| `SV_TessFactor`               | `BuiltIn TessLevelOuter`          | +| `SV_VertexID`                 | `BuiltIn VertexIndex`             | +| `SV_ViewID`                   | `BuiltIn ViewIndex`               | +| `SV_ViewportArrayIndex`       | `BuiltIn ViewportIndex`           | *Note* that `SV_PointSize` is a unique keyword that HLSL doesn't have. @@ -113,7 +113,7 @@ Slang ignores the keywords above and all of them are treated as `highp`. Supported atomic types for each target -------------------------------------- -Shader Model 6.2 introduced [16-bit scalar types](https://github.com/microsoft/DirectXShaderCompiler/wiki/16-Bit-Scalar-Types) such as float16 and int16_t, but they didn't come with any atomic operations. +Shader Model 6.2 introduced [16-bit scalar types](https://github.com/microsoft/DirectXShaderCompiler/wiki/16-Bit-Scalar-Types) such as `float16` and `int16_t`, but they didn't come with any atomic operations. Shader Model 6.6 introduced [atomic operations for 64-bit integer types and bitwise atomic operations for 32-bit float type](https://microsoft.github.io/DirectX-Specs/d3d/HLSL_SM_6_6_Int64_and_Float_Atomics.html), but 16-bit integer types and 16-bit float types are not a part of it. [GLSL 4.3](https://registry.khronos.org/OpenGL/specs/gl/GLSLangSpec.4.30.pdf) introduced atomic operations for 32-bit integer types. diff --git a/docs/user-guide/a3-01-reference-capability-profiles.md b/docs/user-guide/a3-01-reference-capability-profiles.md index 175a764965..43fe8eedb9 100644 --- a/docs/user-guide/a3-01-reference-capability-profiles.md +++ b/docs/user-guide/a3-01-reference-capability-profiles.md @@ -9,41 +9,41 @@ Capability Profiles > Note: To 'make' your own 'profile's, try mixing capabilities with `-capability`.
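As a hedged illustration of the note above, a profile from the list below can be combined with extra capabilities on the `slangc` command line roughly like this. The shader and entry-point names are placeholders, and the exact set of capability atom names depends on the Slang version:

```bash
# Compile against a specific HLSL shader-model profile.
slangc kernel.slang -entry computeMain -stage compute -profile sm_6_6 -target dxil -o kernel.dxil

# "Make your own profile" by mixing a base profile with additional capabilities.
slangc kernel.slang -entry computeMain -stage compute -profile glsl_450 -capability spirv_1_5 -target spirv -o kernel.spv
```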
-sm_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7} +`sm_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7}` * HLSL shader model -vs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7} +`vs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7}` * HLSL shader model + vertex shader -ps_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7} +`ps_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7}` * HLSL shader model + pixel shader -hs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7} +`hs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7}` * HLSL shader model + hull shader -gs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7} +`gs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7}` * HLSL shader model + geometry shader -ds_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7} +`ds_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7}` * HLSL shader model + domain shader -cs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7} +`cs_{4_0,4_1,5_0,5_1,6_0,6_1,6_2,6_3,6_4,6_5,6_6,6_7}` * HLSL shader model + compute shader -ms_6_{5,6,7} +`ms_6_{5,6,7}` * HLSL shader model + mesh shader -as_6_{5,6,7} +`as_6_{5,6,7}` * HLSL shader model + amplification shader -lib_6_{1,2,3,4,5,6,7} +`lib_6_{1,2,3,4,5,6,7}` * HLSL shader model for libraries -glsl_{110,120,130,140,150,330,400,410,420,430,440,450,460} +`glsl_{110,120,130,140,150,330,400,410,420,430,440,450,460}` * GLSL versions -spirv_1_{1,2,3,4,5,6} +`spirv_1_{1,2,3,4,5,6}` * SPIRV versions -metallib_2_{3,4} -* Metal versions \ No newline at end of file +`metallib_2_{3,4}` +* Metal versions diff --git a/extras/formatting.sh b/extras/formatting.sh index 8e44eea81d..f6a3134aac 100755 --- a/extras/formatting.sh +++ b/extras/formatting.sh @@ -9,13 +9,36 @@ check_only=0 no_version_check=0 run_cpp=0 run_yaml=0 +run_markdown=0 run_sh=0 run_cmake=0 run_all=1 +show_help() { + me=$(basename "$0") + cat <<EOF +Usage: $me [-h] [--check-only] [--no-version-check] [--source <path>] [--cpp] [--yaml] [--md] [--sh] [--cmake] + +Options: + --check-only Check formatting without modifying files + --no-version-check Skip version compatibility checks + --source Path to source directory to format (defaults to parent of script directory) + --cpp Format only C++ files + --yaml Format only YAML/JSON files + --md Format only markdown files + --sh Format only shell script files + --cmake Format only CMake files +EOF +} + while [[ "$#" -gt 0 ]]; do case $1 in - -h | --help) help=1 ;; + -h | --help) + show_help + exit 0 + ;; --check-only) check_only=1 ;; --no-version-check) no_version_check=1 ;; --cpp) @@ -26,6 +49,10 @@ while [[ "$#" -gt 0 ]]; do run_yaml=1 run_all=0 ;; + --md) + run_markdown=1 + run_all=0 + ;; --sh) run_sh=1 run_all=0 ;; @@ -38,29 +65,15 @@ while [[ "$#" -gt 0 ]]; do source_dir="$2" shift ;; + *) + echo "unrecognized argument: $1" + show_help + exit 1 + ;; esac shift done -if [ "$help" ]; then - me=$(basename "$0") - cat <<EOF -Usage: $me [-h] [--check-only] [--no-version-check] [--source <path>] [--cpp] [--yaml] [--sh] [--cmake] - -Options: - --check-only Check formatting without modifying files - --no-version-check Skip version compatibility checks - --source Path to source directory to format (defaults to parent of script directory) - --cpp Format only C++ files - --yaml Format only YAML/JSON files - --sh Format only shell script files - --cmake Format only CMake files -EOF - exit 0 -fi - cd "$source_dir" || exit 1 require_bin() { @@ -177,18 +190,16 @@ cpp_formatting() { fi } -yaml_json_formatting() { - echo "Formatting yaml and json files..."
>&2 - - readarray -t files < <(git ls-files "*.yaml" "*.yml" "*.json" ':!external/**') - +# Format the 'files' array using the prettier tool (abstracted here because +# it's used by both markdown and yaml/json formatting) prettier_formatting() { if [ "$check_only" -eq 1 ]; then for file in "${files[@]}"; do if ! output=$(prettier "$file" 2>/dev/null); then continue fi if ! diff -q "$file" <(echo "$output") >/dev/null 2>&1; then - diff --color -u --label "$file" --label "$file" "$file" <(echo "$output") + diff --color -u --label "$file" --label "$file" "$file" <(echo "$output") || : exit_code=1 fi done @@ -197,6 +208,22 @@ fi } +yaml_json_formatting() { + echo "Formatting yaml and json files..." >&2 + + readarray -t files < <(git ls-files "*.yaml" "*.yml" "*.json" ':!external/**') + + prettier_formatting +} + +markdown_formatting() { + echo "Formatting markdown files..." >&2 + + readarray -t files < <(git ls-files "*.md" ':!external/**') + + prettier_formatting +} + sh_formatting() { echo "Formatting sh files..." >&2 @@ -217,6 +244,7 @@ sh_formatting ((run_all || run_sh)) && sh_formatting ((run_all || run_cmake)) && cmake_formatting ((run_all || run_yaml)) && yaml_json_formatting +((run_markdown)) && markdown_formatting ((run_all || run_cpp)) && cpp_formatting exit $exit_code
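With the `--md` option wired in above, typical invocations of the updated script look like the following sketch (run from the repository root; note that markdown formatting only runs when `--md` is passed explicitly, since the dispatch line is `((run_markdown))` rather than `((run_all || run_markdown))`):

```bash
# Format only the markdown files tracked by git (external/ is excluded).
./extras/formatting.sh --md

# Check markdown formatting without modifying files, e.g. in CI.
./extras/formatting.sh --md --check-only

# Run every formatter; markdown is opt-in, so --md must be passed alongside the rest.
./extras/formatting.sh --cpp --yaml --md --sh --cmake
```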