diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0170e3c..46ffd1b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,23 @@
 # Changelog
 
+## Version 3.0.0
+
+### New Features
+
+- **Official Doc-Comments Support:** We've introduced support for official doc-comments as defined in [RFC145](https://github.com/NixOS/rfcs/pull/145). This aligns nixdoc with the official documentation standard.
+
+### Deprecated Features
+
+- **Legacy Custom Format:** The custom nixdoc format is now considered a legacy feature. We plan to phase it out in future versions to streamline documentation practices.
+- We encourage users to transition to the official doc-comment format introduced in this release.
+- For now, we will continue to maintain the legacy format, but we will not accept new features or enhancements for it. This allows for a transition period to the new documentation practices.
+
+See the [migration guide](./doc/migration.md) for a smooth transition.
+
+By @hsjobeki; co-authored by @mightyiam
+
+in https://github.com/nix-community/nixdoc/pull/91.
+
 ## 2.7.0
 
 - Added support to customise the attribute set prefix, which was previously hardcoded to `lib`.
diff --git a/README.md b/README.md
index a593696..e24b741 100644
--- a/README.md
+++ b/README.md
@@ -10,19 +10,78 @@ function set.
 
 ## Comment format
 
-Currently, identifiers are included in the documentation if they have
-a preceding comment in multiline syntax `/* something */`.
+This tool implements a subset of the doc-comment standard specified in [RFC-145/doc-comments](https://github.com/NixOS/rfcs/blob/master/rfcs/0145-doc-strings.md).
+However, it is currently limited to generating documentation for statically analysable attribute paths only.
+In the future, it could be the role of a Nix interpreter to obtain the values to be documented and their doc-comments.
 
-Two special line beginnings are recognised:
+Doc-comments must start with an additional asterisk (`*`), i.e. `/**`, to be recognized as doc-comments.
+
+The content of the doc-comment should conform to the [CommonMark](https://spec.commonmark.org/0.30/) specification.
+
+### Example
+
+The following is an example of markdown documentation for new and existing users of nixdoc.
+
+> Sidenote: Indentation is detected automatically and should be consistent across the content.
+>
+> If you are used to multiline strings (`''`) in Nix, this should feel intuitive.
+
+````nix
+{
+  /**
+    This function adds two numbers
+
+    # Example
+
+    ```nix
+    add 4 5
+    =>
+    9
+    ```
+
+    # Type
+
+    ```
+    add :: Number -> Number -> Number
+    ```
+
+    # Arguments
+
+    a
+    : The first number
+
+    b
+    : The second number
+
+  */
+  add = a: b: a + b;
+}
+````
+
+> Note: Within nixpkgs, the convention of using [definition lists](https://www.markdownguide.org/extended-syntax/#definition-lists) for documenting arguments has been established.
+
+
+## Custom nixdoc format (Legacy)
+
+You should consider migrating to the newer format described above.
+
+See the [migration guide](./doc/migration.md).
+
+### Comment format (legacy)
+
+Identifiers are included in the documentation if they have
+a preceding comment in multiline syntax `/* something */`.
+
+Two special line beginnings are recognized:
 
 * `Example:` Everything following this line will be assumed to be a verbatim
   usage example.
-* `Type:` This line will be interpreted as a faux type signature.
+* `Type:` This line will be interpreted as a faux-type signature.
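+
+For example, a comment in the legacy format using both markers might look like the following sketch (modelled on `concatStrings` from nixpkgs' `lib/strings.nix`):
+
+```nix
+{
+  /* Concatenate a list of strings.
+
+     Type: concatStrings :: [string] -> string
+
+     Example:
+       concatStrings ["foo" "bar"]
+       => "foobar"
+  */
+  concatStrings = builtins.concatStringsSep "";
+}
+```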
 
 These will result in appropriate elements being inserted into the output.
 
-## Function arguments
+### Function arguments (legacy)
 
 Function arguments can be documented by prefixing them with a comment:
 
diff --git a/doc/migration.md b/doc/migration.md
new file mode 100644
index 0000000..257e40e
--- /dev/null
+++ b/doc/migration.md
@@ -0,0 +1,125 @@
+# Migration Guide
+
+Upgrading from nixdoc <= 2.x.x to >= 3.0.0.
+
+To leverage the new doc-comment features and prepare for the deprecation of the legacy format, follow these guidelines:
+
+## Documentation Comments
+
+- Use double asterisks `/** */` to mark comments intended as documentation. This differentiates them from internal comments and ensures they are properly processed as part of the documentation.
+
+**Example:**
+
+`lib/attrsets.nix (old format)`
+````nix
+/* Filter an attribute set by removing all attributes for which the
+   given predicate return false.
+   Example:
+     filterAttrs (n: v: n == "foo") { foo = 1; bar = 2; }
+     => { foo = 1; }
+   Type:
+     filterAttrs :: (String -> Any -> Bool) -> AttrSet -> AttrSet
+*/
+filterAttrs =
+  # Predicate taking an attribute name and an attribute value, which returns `true` to include the attribute or `false` to exclude the attribute.
+  pred:
+  # The attribute set to filter
+  set:
+  listToAttrs (concatMap (name: let v = set.${name}; in if pred name v then [(nameValuePair name v)] else []) (attrNames set));
+````
+
+->
+
+`lib/attrsets.nix (new format)`
+````nix
+/**
+  Filter an attribute set by removing all attributes for which the
+  given predicate return false.
+
+  # Example
+
+  ```nix
+  filterAttrs (n: v: n == "foo") { foo = 1; bar = 2; }
+  => { foo = 1; }
+  ```
+
+  # Type
+
+  ```
+  filterAttrs :: (String -> Any -> Bool) -> AttrSet -> AttrSet
+  ```
+
+  # Arguments
+
+  **pred**
+  : Predicate taking an attribute name and an attribute value, which returns `true` to include the attribute, or `false` to exclude the attribute.
+
+  **set**
+  : The attribute set to filter
+*/
+filterAttrs =
+  pred:
+  set:
+  listToAttrs (concatMap (name: let v = set.${name}; in if pred name v then [(nameValuePair name v)] else []) (attrNames set));
+````
+
+## Documenting Arguments
+
+With the introduction of RFC145, there is a shift in how arguments are documented. While direct "argument" documentation is not specified, you can still document arguments effectively within your doc-comments by writing explicit markdown.
+
+**Example:** Migrating **Single Argument Documentation**
+
+The approach to documenting single arguments has evolved. Instead of individual argument comments, document the function and its arguments together.
+
+> Note: Within nixpkgs, the convention of using [definition lists](https://www.markdownguide.org/extended-syntax/#definition-lists) for documenting arguments has been established.
+
+```nix
+{
+  /**
+    The `id` function returns the provided value unchanged.
+
+    # Arguments
+
+    `x` (Any)
+    : The value to be returned.
+
+  */
+  id = x: x;
+}
+```
+
+If arguments require more complex documentation, consider starting an extra section per argument:
+
+```nix
+{
+  /**
+    The `id` function returns the provided value unchanged.
+
+    # Arguments
+
+    ## **x** (Any)
+    (...Some comprehensive documentation)
+
+  */
+  id = x: x;
+}
+```
+
+**Example:** Documenting Structured Arguments
+
+Structured arguments (described in RFC145 as "lambda formals") can be documented using doc-comments.
+
+```nix
+{
+  /**
+    The `add` function calculates the sum of `a` and `b`.
+  */
+  add = {
+    /** The first number to add.
*/ + a, + /** The second number to add. */ + b + }: a + b; +} +``` + +Ensure your documentation comments start with double asterisks to comply with the new standard. The legacy format remains supported for now but will not receive new features. It will be removed once important downstream projects have been migrated. diff --git a/src/comment.rs b/src/comment.rs new file mode 100644 index 0000000..ec28039 --- /dev/null +++ b/src/comment.rs @@ -0,0 +1,110 @@ +use rnix::ast::{self, AstToken}; +use rnix::{match_ast, SyntaxNode}; +use rowan::ast::AstNode; + +/// Implements functions for doc-comments according to rfc145. +pub trait DocComment { + fn doc_text(&self) -> Option<&str>; +} + +impl DocComment for ast::Comment { + /// Function returns the contents of the doc-comment, if the [ast::Comment] is a + /// doc-comment, or None otherwise. + /// + /// Note: [ast::Comment] holds both the single-line and multiline comment. + /// + /// /**{content}*/ + /// -> {content} + /// + /// It is named `doc_text` to complement [ast::Comment::text]. + fn doc_text(&self) -> Option<&str> { + let text = self.syntax().text(); + // Check whether this is a doc-comment + if text.starts_with(r#"/**"#) && self.text().starts_with('*') { + self.text().strip_prefix('*') + } else { + None + } + } +} + +/// Function retrieves a doc-comment from the [ast::Expr] +/// +/// Returns an [Option] of the first suitable doc-comment. +/// Returns [None] in case no suitable comment was found. +/// +/// Doc-comments can appear in two places for any expression +/// +/// ```nix +/// # (1) directly before the expression (anonymous) +/// /** Doc */ +/// bar: bar; +/// +/// # (2) when assigning a name. +/// { +/// /** Doc */ +/// foo = bar: bar; +/// } +/// ``` +/// +/// If the doc-comment is not found in place (1) the search continues at place (2) +/// More precisely before the NODE_ATTRPATH_VALUE (ast) +/// If no doc-comment was found in place (1) or (2) this function returns None. +pub fn get_expr_docs(expr: &SyntaxNode) -> Option { + if let Some(doc) = get_doc_comment(expr) { + // Found in place (1) + doc.doc_text().map(|v| v.to_owned()) + } else if let Some(ref parent) = expr.parent() { + match_ast! { + match parent { + ast::AttrpathValue(_) => { + if let Some(doc_comment) = get_doc_comment(parent) { + doc_comment.doc_text().map(|v| v.to_owned()) + }else{ + None + } + }, + _ => { + // Yet unhandled ast-nodes + None + } + + } + } + // None + } else { + // There is no parent; + // No further places where a doc-comment could be. + None + } +} + +/// Looks backwards from the given expression +/// Only whitespace or non-doc-comments are allowed in between an expression and the doc-comment. +/// Any other Node or Token stops the peek. +fn get_doc_comment(expr: &SyntaxNode) -> Option { + let mut prev = expr.prev_sibling_or_token(); + loop { + match prev { + Some(rnix::NodeOrToken::Token(ref token)) => { + match_ast! { match token { + ast::Whitespace(_) => { + prev = token.prev_sibling_or_token(); + }, + ast::Comment(it) => { + if it.doc_text().is_some() { + break Some(it); + }else{ + //Ignore non-doc comments. + prev = token.prev_sibling_or_token(); + } + }, + _ => { + break None; + } + }} + } + _ => break None, + }; + } +} diff --git a/src/format.rs b/src/format.rs new file mode 100644 index 0000000..9ccb028 --- /dev/null +++ b/src/format.rs @@ -0,0 +1,100 @@ +use textwrap::dedent; + +/// Ensure all lines in a multi-line doc-comments have the same indentation. 
+/// +/// Consider such a doc comment: +/// +/// ```nix +/// { +/// /* foo is +/// the value: +/// 10 +/// */ +/// foo = 10; +/// } +/// ``` +/// +/// The parser turns this into: +/// +/// ``` +/// foo is +/// the value: +/// 10 +/// ``` +/// +/// +/// where the first line has no leading indentation, and all other lines have preserved their +/// original indentation. +/// +/// What we want instead is: +/// +/// ``` +/// foo is +/// the value: +/// 10 +/// ``` +/// +/// i.e. we want the whole thing to be dedented. To achieve this, we remove all leading whitespace +/// from the first line, and remove all common whitespace from the rest of the string. +pub fn handle_indentation(raw: &str) -> Option { + let result: String = match raw.split_once('\n') { + Some((first, rest)) => { + format!("{}\n{}", first.trim_start(), dedent(rest)) + } + None => raw.into(), + }; + + Some(result.trim().to_owned()).filter(|s| !s.is_empty()) +} + +/// Shift down markdown headings +/// +/// Performs a line-wise matching to '# Heading ' +/// +/// Counts the current numbers of '#' and adds levels: [usize] to them +/// levels := 1; gives +/// '# Heading' -> '## Heading' +/// +/// Commonmark markdown has 6 levels of headings. Everything beyond that (e.g., H7) is not supported and may produce unexpected renderings. +/// by default this function makes sure, headings don't exceed the H6 boundary. +/// levels := 2; +/// ... +/// H4 -> H6 +/// H6 -> H6 +/// +pub fn shift_headings(raw: &str, levels: usize) -> String { + let mut result = String::new(); + for line in raw.split_inclusive('\n') { + if line.trim_start().starts_with('#') { + result.push_str(&handle_heading(line, levels)); + } else { + result.push_str(line); + } + } + result +} + +// Dumb heading parser. +pub fn handle_heading(line: &str, levels: usize) -> String { + let chars = line.chars(); + + // let mut leading_trivials: String = String::new(); + let mut hashes = String::new(); + let mut rest = String::new(); + for char in chars { + match char { + '#' if rest.is_empty() => { + // only collect hashes if no other tokens + hashes.push(char) + } + _ => rest.push(char), + } + } + let new_hashes = match hashes.len() + levels { + // We reached the maximum heading size. + 6.. => "#".repeat(6), + _ => "#".repeat(hashes.len() + levels), + }; + + format!("{new_hashes}{rest}") +} diff --git a/src/legacy.rs b/src/legacy.rs new file mode 100644 index 0000000..ce01d2c --- /dev/null +++ b/src/legacy.rs @@ -0,0 +1,116 @@ +use rnix::{ + ast::{AstToken, Comment, Expr, Lambda, Param}, + SyntaxKind, SyntaxNode, +}; +use rowan::ast::AstNode; + +use crate::{ + commonmark::{Argument, SingleArg}, + format::handle_indentation, + retrieve_doc_comment, +}; + +/// Retrieve documentation comments. +pub fn retrieve_legacy_comment(node: &SyntaxNode, allow_line_comments: bool) -> Option { + // if the current node has a doc comment it'll be immediately preceded by that comment, + // or there will be a whitespace token and *then* the comment tokens before it. We merge + // multiple line comments into one large comment if they are on adjacent lines for + // documentation simplicity. + let mut token = node.first_token()?.prev_token()?; + if token.kind() == SyntaxKind::TOKEN_WHITESPACE { + token = token.prev_token()?; + } + if token.kind() != SyntaxKind::TOKEN_COMMENT { + return None; + } + // if we want to ignore line comments (eg because they may contain deprecation + // comments on attributes) we'll backtrack to the first preceding multiline comment. 
+ while !allow_line_comments && token.text().starts_with('#') { + token = token.prev_token()?; + if token.kind() == SyntaxKind::TOKEN_WHITESPACE { + token = token.prev_token()?; + } + if token.kind() != SyntaxKind::TOKEN_COMMENT { + return None; + } + } + + if token.text().starts_with("/*") { + return Some(Comment::cast(token)?.text().to_string()); + } + + // backtrack to the start of the doc comment, allowing only adjacent line comments. + // we don't care much about optimization here, doc comments aren't long enough for that. + if token.text().starts_with('#') { + let mut result = String::new(); + while let Some(comment) = Comment::cast(token) { + if !comment.syntax().text().starts_with('#') { + break; + } + result.insert_str(0, comment.text().trim()); + let ws = match comment.syntax().prev_token() { + Some(t) if t.kind() == SyntaxKind::TOKEN_WHITESPACE => t, + _ => break, + }; + // only adjacent lines continue a doc comment, empty lines do not. + match ws.text().strip_prefix('\n') { + Some(trail) if !trail.contains('\n') => result.insert(0, ' '), + _ => break, + } + token = match ws.prev_token() { + Some(c) => c, + _ => break, + }; + } + return Some(result); + } + + None +} + +/// Traverse directly chained nix lambdas and collect the identifiers of all lambda arguments +/// until an unexpected AST node is encountered. +pub fn collect_lambda_args(mut lambda: Lambda) -> Vec { + let mut args = vec![]; + + loop { + match lambda.param().unwrap() { + // a variable, e.g. `x:` in `id = x: x` + // Single args are not supported by RFC145, due to ambiguous placement rules. + Param::IdentParam(id) => { + args.push(Argument::Flat(SingleArg { + name: id.to_string(), + doc: handle_indentation( + &retrieve_legacy_comment(id.syntax(), true).unwrap_or_default(), + ), + })); + } + // an ident in a pattern, e.g. `a` in `foo = { a }: a` + Param::Pattern(pat) => { + // collect doc-comments for each lambda formal + // Lambda formals are supported by RFC145 + let pattern_vec: Vec<_> = pat + .pat_entries() + .map(|entry| SingleArg { + name: entry.ident().unwrap().to_string(), + doc: handle_indentation( + &retrieve_doc_comment(entry.syntax(), Some(1)) + .or(retrieve_legacy_comment(entry.syntax(), true)) + .unwrap_or_default(), + ), + }) + .collect(); + + args.push(Argument::Pattern(pattern_vec)); + } + } + + // Curried or not? + match lambda.body() { + Some(Expr::Lambda(inner)) => lambda = inner, + _ => break, + } + } + + args +} diff --git a/src/main.rs b/src/main.rs index 3a0b2a1..2cfaa95 100644 --- a/src/main.rs +++ b/src/main.rs @@ -21,16 +21,25 @@ //! * extract line number & add it to generated output //! * figure out how to specify examples (& leading whitespace?!) +mod comment; mod commonmark; +mod format; +mod legacy; +#[cfg(test)] +mod test; +use crate::{format::handle_indentation, legacy::retrieve_legacy_comment}; + +use self::comment::get_expr_docs; use self::commonmark::*; +use format::shift_headings; +use legacy::collect_lambda_args; use rnix::{ - ast::{AstToken, Attr, AttrpathValue, Comment, Expr, Inherit, Lambda, LetIn, Param}, + ast::{Attr, AttrpathValue, Expr, Inherit, LetIn}, SyntaxKind, SyntaxNode, }; use rowan::{ast::AstNode, WalkEvent}; use std::fs; -use textwrap::dedent; use std::collections::HashMap; use std::io; @@ -70,9 +79,11 @@ struct DocComment { doc: String, /// Optional type annotation for the thing being documented. 
+ /// This is only available as legacy feature doc_type: Option, /// Usage example(s) (interpreted as a single code block) + /// This is only available as legacy feature example: Option, } @@ -102,127 +113,51 @@ impl DocItem { } } -/// Retrieve documentation comments. -fn retrieve_doc_comment(node: &SyntaxNode, allow_line_comments: bool) -> Option { - // if the current node has a doc comment it'll be immediately preceded by that comment, - // or there will be a whitespace token and *then* the comment tokens before it. We merge - // multiple line comments into one large comment if they are on adjacent lines for - // documentation simplicity. - let mut token = node.first_token()?.prev_token()?; - if token.kind() == SyntaxKind::TOKEN_WHITESPACE { - token = token.prev_token()?; - } - if token.kind() != SyntaxKind::TOKEN_COMMENT { - return None; - } - - // if we want to ignore line comments (eg because they may contain deprecation - // comments on attributes) we'll backtrack to the first preceding multiline comment. - while !allow_line_comments && token.text().starts_with('#') { - token = token.prev_token()?; - if token.kind() == SyntaxKind::TOKEN_WHITESPACE { - token = token.prev_token()?; - } - if token.kind() != SyntaxKind::TOKEN_COMMENT { - return None; - } - } - - if token.text().starts_with("/*") { - return Some(Comment::cast(token)?.text().to_string()); - } - - // backtrack to the start of the doc comment, allowing only adjacent line comments. - // we don't care much about optimization here, doc comments aren't long enough for that. - if token.text().starts_with('#') { - let mut result = String::new(); - while let Some(comment) = Comment::cast(token) { - if !comment.syntax().text().starts_with('#') { - break; - } - result.insert_str(0, comment.text().trim()); - let ws = match comment.syntax().prev_token() { - Some(t) if t.kind() == SyntaxKind::TOKEN_WHITESPACE => t, - _ => break, - }; - // only adjacent lines continue a doc comment, empty lines do not. - match ws.text().strip_prefix('\n') { - Some(trail) if !trail.contains('\n') => result.insert(0, ' '), - _ => break, - } - token = match ws.prev_token() { - Some(c) => c, - _ => break, - }; - } - return Some(result); - } - - None +/// Returns a rfc145 doc-comment if one is present +pub fn retrieve_doc_comment(node: &SyntaxNode, shift_headings_by: Option) -> Option { + let doc_comment = get_expr_docs(node); + + doc_comment.map(|doc_comment| { + shift_headings( + &handle_indentation(&doc_comment).unwrap_or(String::new()), + // H1 to H4 can be used in the doc-comment with the current rendering. + // They will be shifted to H3, H6 + // H1 and H2 are currently used by the outer rendering. (category and function name) + shift_headings_by.unwrap_or(2), + ) + }) } /// Transforms an AST node into a `DocItem` if it has a leading /// documentation comment. fn retrieve_doc_item(node: &AttrpathValue) -> Option { - let comment = retrieve_doc_comment(node.syntax(), false)?; let ident = node.attrpath().unwrap(); // TODO this should join attrs() with '.' to handle whitespace, dynamic attrs and string // attrs. none of these happen in nixpkgs lib, and the latter two should probably be // rejected entirely. let item_name = ident.to_string(); - Some(DocItem { - name: item_name, - comment: parse_doc_comment(&comment), - args: vec![], - }) -} - -/// Ensure all lines in a multi-line doc-comments have the same indentation. 
-/// -/// Consider such a doc comment: -/// -/// ```nix -/// { -/// /* foo is -/// the value: -/// 10 -/// */ -/// foo = 10; -/// } -/// ``` -/// -/// The parser turns this into: -/// -/// ``` -/// foo is -/// the value: -/// 10 -/// ``` -/// -/// -/// where the first line has no leading indentation, and all other lines have preserved their -/// original indentation. -/// -/// What we want instead is: -/// -/// ``` -/// foo is -/// the value: -/// 10 -/// ``` -/// -/// i.e. we want the whole thing to be dedented. To achieve this, we remove all leading whitespace -/// from the first line, and remove all common whitespace from the rest of the string. -fn handle_indentation(raw: &str) -> Option { - let result: String = match raw.split_once('\n') { - Some((first, rest)) => { - format!("{}\n{}", first.trim_start(), dedent(rest)) + let doc_comment = retrieve_doc_comment(node.syntax(), Some(2)); + match doc_comment { + Some(comment) => Some(DocItem { + name: item_name, + comment: DocComment { + doc: comment, + doc_type: None, + example: None, + }, + args: vec![], + }), + // Fallback to legacy comment is there is no doc_comment + None => { + let comment = retrieve_legacy_comment(node.syntax(), false)?; + Some(DocItem { + name: item_name, + comment: parse_doc_comment(&comment), + args: vec![], + }) } - None => raw.into(), - }; - - Some(result.trim().to_owned()).filter(|s| !s.is_empty()) + } } /// Dumb, mutable, hacky doc comment "parser". @@ -266,49 +201,6 @@ fn parse_doc_comment(raw: &str) -> DocComment { } } -/// Traverse a Nix lambda and collect the identifiers of arguments -/// until an unexpected AST node is encountered. -fn collect_lambda_args(mut lambda: Lambda) -> Vec { - let mut args = vec![]; - - loop { - match lambda.param().unwrap() { - // a variable, e.g. `id = x: x` - Param::IdentParam(id) => { - args.push(Argument::Flat(SingleArg { - name: id.to_string(), - doc: handle_indentation( - &retrieve_doc_comment(id.syntax(), true).unwrap_or_default(), - ), - })); - } - // an attribute set, e.g. `foo = { a }: a` - Param::Pattern(pat) => { - // collect doc-comments for each attribute in the set - let pattern_vec: Vec<_> = pat - .pat_entries() - .map(|entry| SingleArg { - name: entry.ident().unwrap().to_string(), - doc: handle_indentation( - &retrieve_doc_comment(entry.syntax(), true).unwrap_or_default(), - ), - }) - .collect(); - - args.push(Argument::Pattern(pattern_vec)); - } - } - - // Curried or not? 
- match lambda.body() { - Some(Expr::Lambda(inner)) => lambda = inner, - _ => break, - } - } - - args -} - /// Traverse the arena from a top-level SetEntry and collect, where /// possible: /// @@ -409,8 +301,9 @@ fn retrieve_description(nix: &rnix::Root, description: &str, category: &str) -> category, &nix.syntax() .first_child() - .and_then(|node| retrieve_doc_comment(&node, false)) - .and_then(|comment| handle_indentation(&comment)) + .and_then(|node| retrieve_doc_comment(&node, Some(1)) + .or(retrieve_legacy_comment(&node, false))) + .and_then(|doc_item| handle_indentation(&doc_item)) .unwrap_or_default() ) } @@ -438,129 +331,3 @@ fn main() { .expect("Failed to write section") } } - -#[test] -fn test_main() { - let mut output = Vec::new(); - let src = fs::read_to_string("test/strings.nix").unwrap(); - let locs = serde_json::from_str(&fs::read_to_string("test/strings.json").unwrap()).unwrap(); - let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); - let desc = "string manipulation functions"; - let prefix = "lib"; - let category = "strings"; - - // TODO: move this to commonmark.rs - writeln!( - output, - "# {} {{#sec-functions-library-{}}}\n", - desc, category - ) - .expect("Failed to write header"); - - for entry in collect_entries(nix, prefix, category) { - entry - .write_section(&locs, &mut output) - .expect("Failed to write section") - } - - let output = String::from_utf8(output).expect("not utf8"); - - insta::assert_snapshot!(output); -} - -#[test] -fn test_description_of_lib_debug() { - let mut output = Vec::new(); - let src = fs::read_to_string("test/lib-debug.nix").unwrap(); - let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); - let prefix = "lib"; - let category = "debug"; - let desc = retrieve_description(&nix, &"Debug", category); - writeln!(output, "{}", desc).expect("Failed to write header"); - - for entry in collect_entries(nix, prefix, category) { - entry - .write_section(&Default::default(), &mut output) - .expect("Failed to write section") - } - - let output = String::from_utf8(output).expect("not utf8"); - - insta::assert_snapshot!(output); -} - -#[test] -fn test_arg_formatting() { - let mut output = Vec::new(); - let src = fs::read_to_string("test/arg-formatting.nix").unwrap(); - let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); - let prefix = "lib"; - let category = "options"; - - for entry in collect_entries(nix, prefix, category) { - entry - .write_section(&Default::default(), &mut output) - .expect("Failed to write section") - } - - let output = String::from_utf8(output).expect("not utf8"); - - insta::assert_snapshot!(output); -} - -#[test] -fn test_inherited_exports() { - let mut output = Vec::new(); - let src = fs::read_to_string("test/inherited-exports.nix").unwrap(); - let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); - let prefix = "lib"; - let category = "let"; - - for entry in collect_entries(nix, prefix, category) { - entry - .write_section(&Default::default(), &mut output) - .expect("Failed to write section") - } - - let output = String::from_utf8(output).expect("not utf8"); - - insta::assert_snapshot!(output); -} - -#[test] -fn test_line_comments() { - let mut output = Vec::new(); - let src = fs::read_to_string("test/line-comments.nix").unwrap(); - let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); - let prefix = "lib"; - let category = "let"; - - for entry in collect_entries(nix, prefix, category) { - entry - .write_section(&Default::default(), &mut 
output) - .expect("Failed to write section") - } - - let output = String::from_utf8(output).expect("not utf8"); - - insta::assert_snapshot!(output); -} - -#[test] -fn test_multi_line() { - let mut output = Vec::new(); - let src = fs::read_to_string("test/multi-line.nix").unwrap(); - let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); - let prefix = "lib"; - let category = "let"; - - for entry in collect_entries(nix, prefix, category) { - entry - .write_section(&Default::default(), &mut output) - .expect("Failed to write section") - } - - let output = String::from_utf8(output).expect("not utf8"); - - insta::assert_snapshot!(output); -} diff --git a/src/snapshots/nixdoc__arg_formatting.snap b/src/snapshots/nixdoc__test__arg_formatting.snap similarity index 100% rename from src/snapshots/nixdoc__arg_formatting.snap rename to src/snapshots/nixdoc__test__arg_formatting.snap diff --git a/src/snapshots/nixdoc__description_of_lib_debug.snap b/src/snapshots/nixdoc__test__description_of_lib_debug.snap similarity index 100% rename from src/snapshots/nixdoc__description_of_lib_debug.snap rename to src/snapshots/nixdoc__test__description_of_lib_debug.snap diff --git a/src/snapshots/nixdoc__test__doc_comment.snap b/src/snapshots/nixdoc__test__doc_comment.snap new file mode 100644 index 0000000..c69cec1 --- /dev/null +++ b/src/snapshots/nixdoc__test__doc_comment.snap @@ -0,0 +1,60 @@ +--- +source: src/test.rs +expression: output +--- +## `lib.debug.nixdoc` {#function-library-lib.debug.nixdoc} + +**Type**: `This is a parsed type` + +nixdoc-legacy comment + +::: {.example #function-library-example-lib.debug.nixdoc} +# `lib.debug.nixdoc` usage example + +```nix +This is a parsed example +``` +::: + +## `lib.debug.rfc-style` {#function-library-lib.debug.rfc-style} + +doc comment in markdown format + +## `lib.debug.argumentTest` {#function-library-lib.debug.argumentTest} + +doc comment in markdown format + +### Example (Should be a heading) + +This is just markdown + +Type: (Should NOT be a heading) + +This is just markdown + +structured function argument + +: `formal1` + + : Legacy line comment + + `formal2` + + : Legacy Block + + `formal3` + + : Legacy + multiline + comment + + `formal4` + + : official doc-comment variant + + +## `lib.debug.foo` {#function-library-lib.debug.foo} + +Comment + + diff --git a/src/snapshots/nixdoc__test__doc_comment_section_description.snap b/src/snapshots/nixdoc__test__doc_comment_section_description.snap new file mode 100644 index 0000000..1450900 --- /dev/null +++ b/src/snapshots/nixdoc__test__doc_comment_section_description.snap @@ -0,0 +1,8 @@ +--- +source: src/test.rs +expression: output +--- +# Debug {#sec-functions-library-debug} +Markdown section heading + + diff --git a/src/snapshots/nixdoc__test__headings.snap b/src/snapshots/nixdoc__test__headings.snap new file mode 100644 index 0000000..78deea6 --- /dev/null +++ b/src/snapshots/nixdoc__test__headings.snap @@ -0,0 +1,22 @@ +--- +source: src/test.rs +expression: output +--- +### h1-heading + +#### h2-heading + +##### h3-heading + +###### h4-heading + +This should be h6 + +###### h5-heading + +This should be h6 as well + +###### h6-heading + +This should be h6 as well + diff --git a/src/snapshots/nixdoc__inherited_exports.snap b/src/snapshots/nixdoc__test__inherited_exports.snap similarity index 100% rename from src/snapshots/nixdoc__inherited_exports.snap rename to src/snapshots/nixdoc__test__inherited_exports.snap diff --git a/src/snapshots/nixdoc__line_comments.snap 
b/src/snapshots/nixdoc__test__line_comments.snap similarity index 100% rename from src/snapshots/nixdoc__line_comments.snap rename to src/snapshots/nixdoc__test__line_comments.snap diff --git a/src/snapshots/nixdoc__main.snap b/src/snapshots/nixdoc__test__main.snap similarity index 100% rename from src/snapshots/nixdoc__main.snap rename to src/snapshots/nixdoc__test__main.snap diff --git a/src/snapshots/nixdoc__multi_line.snap b/src/snapshots/nixdoc__test__multi_line.snap similarity index 100% rename from src/snapshots/nixdoc__multi_line.snap rename to src/snapshots/nixdoc__test__multi_line.snap diff --git a/src/test.rs b/src/test.rs new file mode 100644 index 0000000..f7e4b35 --- /dev/null +++ b/src/test.rs @@ -0,0 +1,182 @@ +use rnix; +use std::fs; + +use std::io::Write; + +use crate::{collect_entries, format::shift_headings, retrieve_description}; + +#[test] +fn test_main() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/strings.nix").unwrap(); + let locs = serde_json::from_str(&fs::read_to_string("test/strings.json").unwrap()).unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); + let desc = "string manipulation functions"; + let prefix = "lib"; + let category = "strings"; + + // TODO: move this to commonmark.rs + writeln!( + output, + "# {} {{#sec-functions-library-{}}}\n", + desc, category + ) + .expect("Failed to write header"); + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&locs, &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_description_of_lib_debug() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/lib-debug.nix").unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); + let prefix = "lib"; + let category = "debug"; + let desc = retrieve_description(&nix, &"Debug", category); + writeln!(output, "{}", desc).expect("Failed to write header"); + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&Default::default(), &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_arg_formatting() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/arg-formatting.nix").unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); + let prefix = "lib"; + let category = "options"; + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&Default::default(), &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_inherited_exports() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/inherited-exports.nix").unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); + let prefix = "lib"; + let category = "let"; + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&Default::default(), &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_line_comments() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/line-comments.nix").unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to 
parse input"); + let prefix = "lib"; + let category = "let"; + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&Default::default(), &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_multi_line() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/multi-line.nix").unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); + let prefix = "lib"; + let category = "let"; + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&Default::default(), &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_doc_comment() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/doc-comment.nix").unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); + let prefix = "lib"; + let category = "debug"; + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&Default::default(), &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_headings() { + let mut output = String::new(); + let src = fs::read_to_string("test/headings.md").unwrap(); + + output = shift_headings(&src, 2); + + insta::assert_snapshot!(output); +} + +#[test] +fn test_doc_comment_section_description() { + let mut output = Vec::new(); + let src = fs::read_to_string("test/doc-comment-sec-heading.nix").unwrap(); + let nix = rnix::Root::parse(&src).ok().expect("failed to parse input"); + let prefix = "lib"; + let category = "debug"; + let desc = retrieve_description(&nix, &"Debug", category); + writeln!(output, "{}", desc).expect("Failed to write header"); + + for entry in collect_entries(nix, prefix, category) { + entry + .write_section(&Default::default(), &mut output) + .expect("Failed to write section") + } + + let output = String::from_utf8(output).expect("not utf8"); + + insta::assert_snapshot!(output); +} diff --git a/test/doc-comment-sec-heading.nix b/test/doc-comment-sec-heading.nix new file mode 100644 index 0000000..c906d30 --- /dev/null +++ b/test/doc-comment-sec-heading.nix @@ -0,0 +1,4 @@ +/** + Markdown section heading +*/ +{}:{} diff --git a/test/doc-comment.nix b/test/doc-comment.nix new file mode 100644 index 0000000..07b29ad --- /dev/null +++ b/test/doc-comment.nix @@ -0,0 +1,61 @@ +{ + # not a doc comment + hidden = a: a; + + /* + nixdoc-legacy comment + + Example: + + This is a parsed example + + Type: + + This is a parsed type + */ + nixdoc = {}; + + /** + doc comment in markdown format + */ + rfc-style = {}; + + /** + doc comment in markdown format + + # Example (Should be a heading) + + This is just markdown + + Type: (Should NOT be a heading) + + This is just markdown + */ + argumentTest = { + # Legacy line comment + formal1, + # Legacy + # Block + formal2, + /* + Legacy + multiline + comment + */ + formal3, + /** + official doc-comment variant + */ + formal4, + + }: + {}; + + # Omitting a doc comment from an attribute doesn't duplicate the previous one + /** Comment */ + foo = 0; + + # This should not have any docs + bar = 1; + +} diff --git a/test/headings.md b/test/headings.md new file mode 100644 index 0000000..bad1393 --- /dev/null +++ b/test/headings.md @@ -0,0 +1,17 @@ +# h1-heading 
+ +## h2-heading + +### h3-heading + +#### h4-heading + +This should be h6 + +##### h5-heading + +This should be h6 as well + +###### h6-heading + +This should be h6 as well