Cannot test error reporting #136

Closed
xzyfer opened this issue Nov 16, 2014 · 35 comments · Fixed by #494

@xzyfer
Contributor

xzyfer commented Nov 16, 2014

As it stands it's not possible to assert an error condition, e.g. the error on duplicate keys in maps (sass/libsass#628).

There are cases where a spec like the following should be possible: https://github.com/sass/sass-spec/pull/133/files.

Maybe we can redirect error output in the assertion somehow. Thoughts?

/cc @lunelson @hcatlin
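
For illustration (not how the suite worked at the time): an error spec could capture the compiler's stdout, stderr and exit status separately and assert on them. A minimal Ruby sketch, assuming a sassc binary on the PATH and placeholder file names:

require "open3"

# Run the compiler and collect stdout, stderr and the exit status separately.
# "sassc" and the input path are placeholders for illustration.
stdout, stderr, status = Open3.capture3("sassc", "input.scss")

# An error spec would then assert on the status/stderr instead of stdout.
raise "expected a compile error" if status.success?
raise "expected an error message on stderr" if stderr.empty?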

@xzyfer
Contributor Author

xzyfer commented Nov 16, 2014

Other issues affected by this:

sass/libsass#317
sass/libsass#325

@lunelson
Contributor

If we could build out and standardize the testing setup a bit more, perhaps we could standardize a way of capturing console output to log files. I currently use the following gulpfile inside a /test directory under /sassc to run tests; it isn't set up to capture errors, but something in this direction might do the trick:

https://gist.github.com/lunelson/b8b9342f764e133dc0d2

@lunelson
Contributor

I've updated the gist linked above but I'm still having difficulty figuring out how to capture stderr to a file. Maybe I'm calling sassc the wrong way here and someone can shed some light on how to use stdin, stdout, and stderr correctly? Currently it prints error messages in the terminal when running as a gulp watch, and according to the documentation of gulp-run it should be possible to log stderr too. OTOH there's gulp-exec, gulp-shell, etc.

Anyway, it also doesn't seem to stream properly, because I can't get it to work with autoprefixer.

I obviously also wrote this because I'm impatient with the node-sass update lag ;)

Recopied below:

var gulp   = require('gulp');
var run    = require('gulp-run');      // shells out to the sassc binary
var rename = require("gulp-rename");

// Pipe test.scss through sassc and write the result next to it as test.css.
gulp.task('sassc', function () {
  gulp.src('test.scss', { buffer: false })
    .pipe(run('../bin/sassc -s', { verbosity: 1 }))
    .on('error', function (err) { this.end(); }) // swallow compile errors so the watch keeps running
    .pipe(rename(function (path) { path.extname = ".css"; }))
    .pipe(gulp.dest('.'));
});

gulp.task('watch', function () {
  gulp.watch('test.scss', ['sassc']);
});

gulp.task('default', ['watch']);

CC @mgreter

@KittyGiraudel
Contributor

Just in case that helps: at sass-compatibility/sass-compatibility.github.io, we have a way to invert a condition for such a test; basically an unexpect. For instance, "make sure this output is not equal to this".

Testing @error was actually simple. Here is the test file:

@error "ok";

Then, once compiled, we made sure the output is not @error "ok";.

In case that helps.
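
For illustration, that kind of negative assertion could look roughly like this in a Ruby/minitest runner (a sketch only; not the sass-compatibility code, and the file name and sassc invocation are assumptions):

require "minitest/autorun"

class ErrorDirectiveTest < Minitest::Test
  def test_error_directive_does_not_survive_compilation
    output = `sassc input.scss 2>/dev/null`
    # The engine supports @error if the directive is not echoed into the output.
    refute_equal '@error "ok";', output.strip
  end
end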

@mgreter
Contributor

mgreter commented Dec 10, 2014

Another way could be to use custom functions. @warn can already be overloaded and I guess it makes sense to do the same with @error (once implemented). I guess we could then use these hooks in the test suite to increment a counter or something similar, which could then be tested on the output. Just an idea, but could be worth looking into!?

@mgreter
Contributor

mgreter commented Dec 10, 2014

I have something working here, but wanted to get some feedback first on the proposed format.

@function throw($msg){
  @error $msg;
  @return null;
}

.test {
  foo: mock-errors(true);
  bar: throw("fetched");
  content: last-error();
  baz: mock-errors(false);
}

Expected result:

.test {
  content: "fetched"; }

The mock-errors and last-error functions are declared as custom C functions (currently tested via perl-libsass). The @error function is also overloaded and "voids" the actual error context, so execution is not aborted. I guess the rest is pretty self-explanatory. IMO it's a bit clumsy, but it gets the job done and should not be that hard to implement for sassc!

Please let me know what you think!?
Will probably add the WIP PR later too!

@xzyfer
Contributor Author

xzyfer commented Dec 11, 2014

Can the custom extension define mixins?

Ideally we could test @error like below

@function throw($msg) {
  @error $msg;
  @return null;
}

@include mock-errors {
  .test {
    bar: throw("fetched");
    content: last-error();
  }
}

@mgreter
Contributor

mgreter commented Dec 11, 2014

I think this should work. I guess the functions are always executed for every use of the extension. But remember that we need to tell it somehow to "ignore" the actual error (like a try or eval block). That's why I had mock-errors(true) in my initial example. This was the easiest way to get this going and it doesn't add anything to the default runtime.

@xzyfer
Contributor Author

xzyfer commented Dec 11, 2014

My thinking was the mock-errors mixin would indicate to the extension that mocking is required for this code block. This works thanks to @content. The mock-errors mixin definition would look something like this:

@mixin mock-errors() {
    foo: mock-errors(true);
    @content;
    foo: mock-errors(false);
}

@mgreter
Contributor

mgreter commented Dec 11, 2014

My test indicates that this works:

@function throw($msg){
  @error $msg;
  @return null;
}

@mixin mock-errors() {
    foo: mock-errors(true);
    @content;
    foo: mock-errors(false);
}

@include mock-errors {
  .test {
    bar: throw("fetched");
    content: last-error();
  }
}

Expected result:

.test {
  content: "fetched"; }

@xzyfer
Contributor Author

xzyfer commented Dec 11, 2014

I guess my question was: could the mixin definition itself be declared in the extension, so we can remove 99% of the boilerplate?

@lunelson
Contributor

Will this pick up all errors and warnings, or only those thrown by the @warn or @error directives?
(e.g. the issue referred to in the first post is a warning that should be thrown by the parser)

@xzyfer
Contributor Author

xzyfer commented Dec 11, 2014

Good question @lunelson. We've just received two more specs (#167, #168) that require capturing compiler errors.

@KittyGiraudel
Contributor

Please forgive any mistakes I might make on this issue, but do we really need to capture compiler errors? Can't we simply assert that the output should not match the (un)expected_output (as I mentioned earlier in the discussion)?

@mgreter
Contributor

mgreter commented Dec 11, 2014

@xzyfer: You should be able to first compile/add the boilerplate mixin to the context, and afterwards you can use it as normal. I don't see why this shouldn't work! The code is called whenever a Warning or Error is Evaluated or Inspected, which means it is pretty low-level and should catch most if not all cases.

@xzyfer
Contributor Author

xzyfer commented Dec 11, 2014

I was hoping the boilerplate mixin could be defined in the extension itself. Needing to bootstrap specs isn't ideal.

The code is called whenever a Warning or Error is Evaluated or Inspected

As to @lunelson's point, this doesn't sound like it would catch the cases where the compiler itself encounters an error, i.e. when error(msg, path, position) is called in parser.cpp or functions.cpp, which is the primary use case for us atm. Am I mistaken?

@KittyGiraudel
Contributor

Adding an extra piece of information to the discussion.

Now that sass-compatibility uses sass-spec, we cannot do reverse testing using some kind of unexpected_output.css. Fortunately, we found an easy way to test errors: we have the @error message printed in the output.

Consider the following input.scss:

@error "Oops!";

A Sass engine supporting @error will return the following output.css:

Oops!

Would this be possible to implement?

@chriseppstein
Contributor

Sass has an output format for errors that results in CSS output. IMO, errors should be compared using that output. We also need to track warnings. For this, I think there should be a file output.stderr: if it is not present, then any stderr output would be a failure; if it is present, the stderr output would have to match.

@chriseppstein
Contributor

I don't think an unexpected output is required. Simply making an output file that is empty should suffice.

@KittyGiraudel
Contributor

Errors should be compared using that output

This.

@mgreter
Contributor

mgreter commented Jan 16, 2015

@chriseppstein I used pretty much the same approach for my local test runner. But IMHO there could be a use case for having both output.css and output.err present. Basically it just means testing stdout and stderr, and nothing forbids outputting on both (e.g. when emitting warnings). I made both files optional, which simply means that I expect an empty stream. There is no difference between the file being empty or not existing (although output.css could be used to trigger the test in the first place).
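
A minimal Ruby sketch of that convention, where a missing expectation file and an empty one mean the same thing (the helper names are made up for illustration, not taken from the runner):

# A missing expectation file is treated exactly like an empty one:
# both simply mean "expect an empty stream".
def expected(path)
  File.exist?(path) ? File.read(path) : ""
end

# Compare both streams for a single test directory.
def streams_match?(test_dir, actual_stdout, actual_stderr)
  actual_stdout == expected(File.join(test_dir, "output.css")) &&
    actual_stderr == expected(File.join(test_dir, "output.err"))
end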

@am11
Contributor

am11 commented Mar 8, 2015

Perhaps it would be a good idea to replace the Ruby minitest gem with something like C++ minitest, to have more native control over memory and std streams and to test other aspects of libsass beyond Sass code generation. This way we can also write independent tests for source maps and external plugins in the future. One thing we would probably need to write is the coverage report, for which we can hook up native lcov or gcov.

Also, this is slightly related: sass/node-sass#633.

@saper
Member

saper commented Sep 18, 2015

Before we rewrite the whole thing, I had a look at the Ruby code with my basic understanding of the language and found the following:

  • a test case reports standard output, cleaned standard output, standard error, the compile error, and the status code resulting from the compilation
  • a runner just loads a few files that describe the expected values:
    • "expected standard output"
    • "expected clean standard output" (optional)
    • applicable output style
    and feeds them to the test case
  • a comparator takes standard output, standard error, the status and the "clean" output, and applies some very simple rules to determine whether the test failed or succeeded. It currently does not use standard error, but it's there.

It seems to be trivial to add testing against std error, std output or both. You could even have something like "I want this in standard output and not this in standard error". The logic can be pretty arbitrary.

The only real issue is to specify how to express those rules. Something like:

  1. If there is a *testcase*.*outputstyle*.skip file, please skip the test
  2. If there is a *testcase*.*outputstyle*.status file, please match it against the status code returned. Stop checking further. (is it needed at all? probably not)
  3. If there is a *testcase*.*outputstyle*.error file, please compare it against the standard error, fail if different. A zero-length *testcase*.*outputstyle*.error indicates the error output should be empty and no additional checks are to be made. Stop checking further if we did a check here.
  4. If there is a *testcase*.*outputstyle*.clean file, please perform the next test against a "cleaned" output.
  5. If there is a *testcase*.*outputstyle*.css file, please compare it against the standard output.

plus checking of some flags.

As I understand the Ruby code right now, options 1, 4 and 5 are currently implemented. (A sketch of this rule resolution follows below.)
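
A Ruby sketch of how those rules could be resolved by the comparator (not the actual sass-spec code; the helper and file naming are assumptions based on the list above):

# Resolve what a test case expects from the files sitting next to it.
# `base` stands for "testcase.outputstyle"; the helper itself is hypothetical.
def expectations(base)
  return { skip: true } if File.exist?("#{base}.skip")               # rule 1

  if File.exist?("#{base}.status")                                   # rule 2
    return { status: File.read("#{base}.status").to_i }
  end

  if File.exist?("#{base}.error")                                    # rule 3
    # A zero-length .error file means "stderr must be empty";
    # either way, checking stops here.
    return { error: File.read("#{base}.error") }
  end

  exp = { clean: File.exist?("#{base}.clean") }                      # rule 4
  exp[:css] = File.read("#{base}.css") if File.exist?("#{base}.css") # rule 5
  exp
end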

There is also a possibility to generate "expected" output files with a command-line option; we should decide on the logic and the order in which those files are written, for example:

  1. If the script generates some error output, write the .error file. Stop.
  2. If the script generates no error output, write stdout to the .css file.

As I understand the Ruby code right now, option 2 is currently implemented.

So once we agree how to specify the expected error output and the logic behind it, it should be trivial to implement.

@mgreter
Copy link
Contributor

mgreter commented Sep 19, 2015

IMHO we just need someone to do the actual heavy lifting! So far I have only fixed/updated about 5% of all error messages to conform to Ruby Sass, so there is more heavy lifting involved there too!

@saper
Member

saper commented Sep 19, 2015

I have a preliminary fix. My first ruby coding! :)

@saper
Member

saper commented Sep 19, 2015

Comparing stack traces will be a challenge. sass/libsass#1555 is just the tip of the iceberg :)

saper added a commit to saper/sass-spec that referenced this issue Sep 19, 2015
If files "error" and "status" are existing
in the test directory, do expect errors
and test against them.

Also create error and status files on --nuke.
An error will be reported in this case anyway.

Fixes: sass#136
@saper
Member

saper commented Sep 19, 2015

First shot: #494

@xzyfer
Contributor Author

xzyfer commented Sep 19, 2015

@saper I've done some work to normalise Ruby Sass and LibSass error messages in https://github.com/sass-compatibility/sass-compatibility.github.io/blob/master/Rakefile#L94-L95

Might be of help.
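
For reference, such normalisation typically boils down to a few substitutions applied to both engines' stderr before comparison. A hedged Ruby sketch, with illustrative patterns that are not taken from the Rakefile linked above:

# Normalise an error message before comparing the two engines' stderr.
# These particular substitutions are assumptions for illustration only.
def normalize_error(message)
  message.lines.first.to_s               # keep the first line, drop any backtrace
         .sub(/\AError:\s*/i, "")        # drop the "Error:" prefix
         .gsub(/on line \d+ of \S+/, "") # drop file/line information
         .strip
         .downcase
end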

@xzyfer
Contributor Author

xzyfer commented Sep 19, 2015

@saper to be honest, as a first pass, just being able to assert that an error should have occurred, and did occur, is a big step. Currently when Ruby Sass errors the spec fails, which is the core problem.

Asserting that the error messages match can come later.

@saper
Member

saper commented Sep 19, 2015

@xzyfer Initially I wanted to just test the status code, but it was so easy to just fix it all.

What is the proper invocation of the test suite with Ruby Sass? I think I get 80 failures using

ruby sass-spec//sass-spec.rb -c ~/sw/sass/bin/sass --ignore-todo sass-spec

This is on Ruby Sass stable 4ef8e31.

@xzyfer
Contributor Author

xzyfer commented Sep 20, 2015

That looks right, although you don't need --ignore-todo since there are technically no todos for Ruby Sass. Those failures might be legitimate if you're using a newer version of Ruby Sass than what's defined in the Gemfile.lock. A recent version of Ruby Sass changed how it handles rounding in colour functions, which broke a bunch of tests. That means the tests are correct as of the version defined in the Gemfile.

@saper
Member

saper commented Sep 20, 2015

I have gone through all the error-test issues, tested them under #494, and posted the results to #502.

@saper
Member

saper commented Sep 20, 2015

#502 brings tests formatted for #494 into the test suite. With #494 and #502 merged, 100% of tests pass under Ruby Sass 3.4.14.2e0f33b, and all tests that are not marked as todo pass with libsass.

@xzyfer
Contributor Author

xzyfer commented Sep 21, 2015

@saper my preference would be that error message mismatches aren't considered failures for the time being. To be quite honest, we cannot match some of Ruby Sass's error messages without a good chunk of work.

I instead propose that an error test is successful if both engines produce errors, and that we add a --strict flag for failing on error message mismatches. This allows us to continue moving forward without reintroducing regressions, and without losing a lot of time on exactly matching error messages at the cost of feature work.
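
Put differently, the pass/fail decision could be gated on the flag, roughly like this (a Ruby sketch of the proposal, not the logic merged in #494):

# Proposed pass/fail logic for an error spec.
# `strict` would be driven by a hypothetical --strict command-line flag.
def error_spec_passes?(expected_error, actual_status, actual_stderr, strict: false)
  return false if actual_status.zero?          # the engine must have errored at all
  return true unless strict                    # default: any error counts as a pass
  actual_stderr.strip == expected_error.strip  # --strict: messages must also match
end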

saper added a commit to saper/sass-spec that referenced this issue Sep 21, 2015
If files "error" and "status" are existing
in the test directory, do expect errors
and test against them.

The errors returned by the Sass engines are
now treated as test failures (not fatal errors).

--unexpected-pass reports an error whenever
a test marked as "todo" does pass. Failing
"todo" errors are silently marked as "passed".

Also re-create output test files with --nuke.
"status" and "error" files are re-created if necessary.
Any failures will be reported in this case normally.

Fixes: sass#136
@saper
Member

saper commented Sep 21, 2015

Closed via #494. Thank you everyone!

@saper saper closed this as completed Sep 21, 2015