I had a thought: why don't we have a set of tests that run a series of operations against node's fs, collect the results (e.g., errors, returned data), then run the identical operations against Filer + MemoryProvider? If we do that, we should expect Filer to behave the same way as fs in most cases.
There are lots of places where we can't be 100% identical (e.g., sync methods), so we'll never be able to run the entire node fs test suite. However, we're not supposed to let perfect be the enemy of the good, right?
We can do most of what fs does, and it would be nice if we did the same thing as node whenever possible. These tests would be something like ref or snapshot tests, in that you'd expect the same behaviour from two implementations.
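Roughly what I have in mind, as a minimal sketch (the runOp() helper is invented for illustration, and I'm assuming Filer's in-memory provider is exposed as Filer.FileSystem.providers.Memory):

```js
const assert = require('assert');
const os = require('os');
const path = require('path');
const nodeFs = require('fs');
const Filer = require('filer');

// Normalize a callback-style fs call into { errCode, data } so the result of
// the node run and the Filer run can be compared directly.
function runOp(fs, root, op) {
  return new Promise(resolve => {
    op(fs, root, (err, data) => resolve({ errCode: err ? err.code : null, data }));
  });
}

describe('node fs parity', () => {
  it('readFile of a missing file fails the same way on both', async () => {
    // Temp dir keeps the node run away from the user's real files; Filer gets
    // a fresh in-memory filesystem rooted at /.
    const nodeRoot = nodeFs.mkdtempSync(path.join(os.tmpdir(), 'filer-parity-'));
    const filerFs = new Filer.FileSystem({
      provider: new Filer.FileSystem.providers.Memory()
    });

    const op = (fs, root, cb) => fs.readFile(root + '/nope.txt', 'utf8', cb);

    const expected = await runOp(nodeFs, nodeRoot, op);
    const actual = await runOp(filerFs, '/', op);

    assert.strictEqual(actual.errCode, expected.errCode); // both should be 'ENOENT'
  });
});
```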
I think doing this involves the following:
Create a new directory under tests/, maybe tests/node-parity
Figure out how to safely run fs operations without corrupting the user's actual filesystem. Node.js must have solved this already, or its own tests could trash your filesystem. Things like deleting huge numbers of files/directories, filling the disk, etc. are a no-go.
Figure out how to put fs and Filer into the same initial state so tests can run without modification on both implementations. For example, Filer tests often start from /, but that's insane for fs. We'd need some kind of before setup code to create temp dirs or the like, and use those root paths for both sets of test runs (see the setup/teardown sketch after this list).
Figure out how to clean up after each test. With normal Filer tests, we "zero" the db and recreate it after each test. That won't work for fs.
Create a command that basically just runs mocha tests/node-parity/**/*.js in node (not browser).
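For the setup/cleanup pieces, a shared hook file could look something like this (a sketch, assuming mocha root hooks, a node new enough for fs.rmSync, and the same Filer.FileSystem.providers.Memory assumption as above; the file name and getRoots() export are invented):

```js
// tests/node-parity/setup.js (hypothetical) - per-test lifecycle shared by both runs.
const os = require('os');
const path = require('path');
const nodeFs = require('fs');
const Filer = require('filer');

let nodeRoot;
let filerFs;

beforeEach(() => {
  // A fresh, uniquely named temp dir keeps the node run away from real files,
  // and a fresh Memory-backed Filer instance starts from an empty /.
  nodeRoot = nodeFs.mkdtempSync(path.join(os.tmpdir(), 'filer-parity-'));
  filerFs = new Filer.FileSystem({
    provider: new Filer.FileSystem.providers.Memory()
  });
});

afterEach(() => {
  // node: delete the temp dir; Filer: just drop the in-memory provider, which
  // replaces the usual "zero the db" step the existing Filer tests use.
  nodeFs.rmSync(nodeRoot, { recursive: true, force: true });
  filerFs = null;
});

// Tests import the current roots for the two implementations.
module.exports.getRoots = () => ({ nodeRoot, filerFs });
```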
Looked at node's test approach, and one thing they have is a tmpdir.js module that manages and cleans up a temporary dir for tests to happen within; see https://github.com/nodejs/node/blob/master/test/common/tmpdir.js. NOTE: it seems to use a per-thread prefix in the directory name, allowing parallel test execution.
Here's an example of it being used, where all test operations happen rooted in the tmpDir:
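A sketch along those lines (not a verbatim snippet from node's tests; the relative require path assumes a file under node's test/ tree, and tmpdir.path / tmpdir.refresh() are the parts of that module I'm relying on):

```js
const path = require('path');
const fs = require('fs');
const tmpdir = require('../common/tmpdir');

// refresh() wipes and recreates the per-process temp dir before the test runs.
tmpdir.refresh();

// Every path the test touches is rooted inside tmpdir.path, so nothing outside
// that directory can be affected.
const file = path.join(tmpdir.path, 'write-test.txt');
fs.writeFileSync(file, 'hello');
console.log(fs.readFileSync(file, 'utf8')); // 'hello'
```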
https://github.com/metarhia/sandboxed-fs is interesting for locking fs to a directory. We could combine it with the tmpDir idea above. Not sure how much effort to put into this, since we could also completely sandbox node, run a vm, etc.
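To illustrate the idea only (this is a hand-rolled jailFs() sketch, not sandboxed-fs's actual API): wrap the fs methods a test needs so every path resolves inside a jail directory, e.g. the tmpDir above, and nothing can escape it.

```js
const path = require('path');
const nodeFs = require('fs');

// Hypothetical helper: returns an fs-like object whose paths are all resolved
// relative to `root`, with traversal outside the root rejected.
function jailFs(root) {
  const base = path.resolve(root);

  const resolve = p => {
    // Normalizing against '/' strips any leading '..' segments before joining.
    const full = path.resolve(base, '.' + path.posix.normalize('/' + p));
    if (!full.startsWith(base)) {
      throw new Error(`path escapes sandbox: ${p}`);
    }
    return full;
  };

  // Only a few methods wrapped here; a real helper would cover the whole surface.
  return {
    readFile: (p, ...args) => nodeFs.readFile(resolve(p), ...args),
    writeFile: (p, ...args) => nodeFs.writeFile(resolve(p), ...args),
    mkdir: (p, ...args) => nodeFs.mkdir(resolve(p), ...args)
  };
}
```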