Testing approach #2481
Apart from what I've described above, after creating some more tests I started to wonder how many cases/contexts we should cover per input. For example, for pasting linked text I cover a number of different contexts, and for another two input files (two links and a multiline link) the same set of contexts applies. So it gives 15 tests for 3 inputs. I think it makes sense for integration tests, as it gives good coverage of many different cases/usages. The important thing to remember is that input files should vary, each covering different cases/formatting/structure, so the tests are really not redundant. As for unit tests, which should cover the normalisation step (which is context independent), there is no need for different contexts per input.
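To make the contexts-per-input idea concrete, here is a minimal sketch of how one input could be run through several contexts (the context names and test body are my illustration, not taken from the actual suite):

```js
// Hypothetical illustration: one input fixture run through several pasting
// contexts (5 contexts x 3 inputs gives the 15 tests mentioned above).
const contexts = [
	'in the empty editor',
	'in the middle of a paragraph',
	'inside a bold element',
	'inside another link',
	'inside a table cell'
];

describe( 'paste from Word: single link', () => {
	for ( const context of contexts ) {
		it( `pastes ${ context }`, () => {
			// Load the "single link" fixture, paste it into the given context
			// and compare the resulting model with the expected output.
		} );
	}
} );
```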
I started working on lists (ckeditor/ckeditor5-paste-from-office#5), which means writing integration tests first, and I have some more thoughts about testing.

What to test... again

Earlier I have mentioned that:

> Checking `model` only should be sufficient.
It seems to be sufficient when working with inline styles or links. It gets trickier when working with content which is not represented the same way in the model and in the DOM. For example, having a model like:

```
<listItem listIndent="0" listType="bulleted">Item1</listItem>
<listItem listIndent="0" listType="numbered">Item2</listItem>
```

results in HTML like:

```html
<ul>
	<li>Item1</li>
</ul>
<ol>
	<li>Item2</li>
</ol>
```

so understanding such tests requires knowing how exactly the model is transformed. Also, as Word documents contain and output an HTML-like structure (roughly speaking), it is easier to understand and validate a test when there is an input HTML structure which should be transformed into an output HTML structure, and the two can be compared. The model is an intermediate state (important, and it should also be checked during testing) which can sometimes look quite different from the input and the expected output.

Test generation

Looking at the tests created so far, there are basically two approaches. Starting with a test like:

```js
describe( 'bold within text', () => {
	it( 'pastes in the empty editor', () => { ... } );
	it( 'pastes in the paragraph', () => { ... } );
	it( 'pastes inside another bold element', () => { ... } );
} );

describe( 'italic starting text', () => {
	it( 'pastes in the empty editor', () => { ... } );
	it( 'pastes in the paragraph', () => { ... } );
	it( 'pastes inside another italic element', () => { ... } );
} );
```

it can be shortened to something like:

```js
generateTests( {
	'bold within text': {
		'pastes in the empty editor': () => { ... },
		'pastes in the paragraph': () => { ... },
		'pastes inside another bold element': () => { ... }
	},
	'italic starting text': {
		'pastes in the empty editor': () => { ... },
		'pastes in the paragraph': () => { ... },
		'pastes inside another italic element': () => { ... }
	}
} );
```

which is a little less code, but the structure is basically the same and it doesn't make test failures any more readable. Instead of whole test functions it could contain only the expected output, but then for large chunks of HTML/model data it is super hard to format in a readable way. So I'm not happy with it.

The more interesting approach would be something like:

```js
generateTests( {
	'contexts': {
		'pastes in the empty editor': 'context html like <p>Foo bar</p>',
		'pastes in the paragraph': 'context html',
		'pastes inside another bold element': 'context html'
	},
	'inputs': {
		'bold within text': [
			'expected output for context 1',
			'expected output for context 2',
			'expected output for context 3'
		],
		'italic starting text': [
			'expected output for context 1',
			'expected output for context 2',
			'expected output for context 3'
		]
	}
} );
```

so that each input would be automatically tested in each of the defined contexts (a rough sketch of such a helper follows below).

That being said, for the time being I will stick to manually creating whole tests, the same as for the existing ones.
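For reference, this is roughly how such a `generateTests()` helper could be implemented (my illustration of the idea, not code from the repository; `pasteAndGetModel()` is a hypothetical helper standing in for the actual paste-and-read-model logic):

```js
// Hypothetical helper: generates one test per ( input x context ) pair,
// matching the { contexts, inputs } structure shown above.
function generateTests( { contexts, inputs } ) {
	const contextNames = Object.keys( contexts );

	for ( const [ inputName, expectedOutputs ] of Object.entries( inputs ) ) {
		describe( inputName, () => {
			contextNames.forEach( ( contextName, index ) => {
				it( contextName, () => {
					const contextHtml = contexts[ contextName ];
					const expected = expectedOutputs[ index ];

					expect( pasteAndGetModel( inputName, contextHtml ) ).to.equal( expected );
				} );
			} );
		} );
	}
}
```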
After F2F discussion with @Reinmar we have established that:
Unit/integration tests are a crucial part of Paste from Word. Even though some things work out of the box (in most cases) when copying from Word (basic styles, links, etc.), they need to be covered with tests to prevent any regressions and to have full control over what's going on with the plugin code.
I have created one integration test for basic styles (bold, italic, underline, strikethrough) to see how we may approach creating these tests in the most effective manner.
1. Tests structure
It is important to properly structure the test data so tests can be easily read and modified. Since there can be a lot of input data files, the main idea is to use the structure described below.

The nesting depends on how many separate test files there will be for a given plugin. If there is only one, there is no point in creating a separate dir (so there will be `tests/plugin-name.js`). If there is more than one test for a single plugin, then it becomes a directory like `tests/plugin-name/test1.js` and `tests/plugin-name/test2.js`. For example (one test per plugin), see the sketch below.

The important thing to mention is that there can be a separate input file for each browser and for each Word version (so the worst-case scenario of Chrome, Firefox, Safari and Edge for both Word 2016 and Word 2013 gives 8 files). Also, the original Word `.docx` file from which the input file was generated should be stored (another two files). Since Word output is pretty long, combining these files doesn't make any sense, as it would make them completely unreadable.

When it comes to storing the expected output in a separate file, I am not sure. It is important to test pasting one piece of content in different contexts (inline/block context, tables), so for one input there will be many expected outputs depending on the context. As long as it is not necessary, and because the output is usually much shorter, it could simply be placed inside a test file.
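To illustrate, a possible layout matching the description above (the concrete file and directory names are my assumption, not the actual repository structure):

```
tests/
├── paste-from-office.js                 # single test file for the plugin
└── _data/
    └── basic-styles/
        ├── input1.word2016.docx         # original Word documents
        ├── input1.word2013.docx
        ├── input1.chrome.html           # clipboard HTML captured per browser
        ├── input1.firefox.html
        ├── input1.safari.html
        └── input1.edge.html
```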
The whole idea of strongly structured tests is that it is much easier to automate things: loading a specific input file for a specific browser (see the section below), automatically loading input files for another Word version without modifying the test file, etc.
2. Testing on different browsers
There are situations when the same Word document pasted into different browsers produces different data in the clipboard. That means in such situations each browser should have its own input file. The expected output is the same for each browser (you paste the same data, you expect the same result; only the intermediate data is different due to different browser behaviour). So there might be:

- `input1.chrome.html`
- `input1.firefox.html`
- `input1.safari.html`
- `input1.edge.html`

My assumption is that each input should be used only for its specific browser; there is no point in running a test with the `input1.firefox.html` input on e.g. Chrome, because such data will never appear there.

Even if we assume that the normalisation process is browser independent (so a test for a specific browser should not fail on others), which I am not sure about ATM, it is still much more efficient to run each test only on its designated browser (4 browsers × 1 test each instead of 4 browsers × 4 tests).
For such an implementation we will need a mechanism to detect the browser and to correctly serve the input file inside tests based on the detected browser.
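A minimal sketch of what such a mechanism could look like (the helper names and fixture paths are hypothetical; an existing environment-detection utility could be reused instead of parsing the user agent by hand):

```js
// Hypothetical helper: detects the current browser so a test can load the
// matching clipboard fixture (e.g. input1.firefox.html when run on Firefox).
function detectBrowser( userAgent = window.navigator.userAgent ) {
	const ua = userAgent.toLowerCase();

	if ( ua.includes( 'edge' ) ) {
		return 'edge';
	}
	if ( ua.includes( 'firefox' ) ) {
		return 'firefox';
	}
	if ( ua.includes( 'safari' ) && !ua.includes( 'chrome' ) ) {
		return 'safari';
	}

	return 'chrome';
}

// Hypothetical helper: builds the fixture file name for the current browser,
// so the test itself never hardcodes a browser-specific path.
function fixturePath( baseName ) {
	return `./_data/${ baseName }.${ detectBrowser() }.html`;
}

// fixturePath( 'input1' ) -> './_data/input1.chrome.html' when run on Chrome.
```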
3. What to test?
Paste from Word will basically be a 2-step process:

1. The Word clipboard input is normalised.
2. The normalised input is converted to the `view` and then applied to the `model` (here the `view` -> `model` filtering takes place), then to the `view` and finally to the `DOM` structure.

The results of both steps need to be validated with tests (the normalised Word input and the final editor content). However, when checking the final editor content, tests may check the `model`, the `view` or the `DOM` (or all of them). As the `model` will represent the final data, which is then transformed to the `view` and the `DOM`, checking all 3 layers seems redundant. Checking the `model` only should be sufficient.
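As an illustration of checking only the `model`, a minimal integration test could look roughly like this (assuming the Mocha/Chai setup and the `getData` helper from the engine's model dev-utils used in ckeditor5 tests; `pasteHtml()`, the fixture and the expected model string are hypothetical placeholders):

```js
import ClassicTestEditor from '@ckeditor/ckeditor5-core/tests/_utils/classictesteditor';
import Paragraph from '@ckeditor/ckeditor5-paragraph/src/paragraph';
import Bold from '@ckeditor/ckeditor5-basic-styles/src/bold';
import Clipboard from '@ckeditor/ckeditor5-clipboard/src/clipboard';
import { getData } from '@ckeditor/ckeditor5-engine/src/dev-utils/model';

// Simplified stand-in for a real Word clipboard fixture.
const boldWithinTextInput = '<p>Foo <b>bar</b> baz</p>';

describe( 'paste from Word: bold within text', () => {
	it( 'pastes in the empty editor', async () => {
		const element = document.createElement( 'div' );
		document.body.appendChild( element );

		const editor = await ClassicTestEditor.create( element, {
			plugins: [ Paragraph, Bold, Clipboard ]
		} );

		// pasteHtml() is a hypothetical helper that pushes the fixture HTML
		// through the clipboard pipeline the same way a real paste would.
		pasteHtml( editor, boldWithinTextInput );

		// Only the model is asserted; the view and DOM are derived from it.
		expect( getData( editor.model, { withoutSelection: true } ) )
			.to.equal( '<paragraph>Foo <$text bold="true">bar</$text> baz</paragraph>' );

		await editor.destroy();
		element.remove();
	} );
} );
```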