The following tests need to be reviewed and refactored:
tests/bin/agent.test.ts > Starting the agent in the foreground > should start the agent and clean up the lockfile when a kill signal is received
tests/bin/agent.test.ts > Starting the agent in the background > should start the agent and clean up the lockfile when a kill signal is received
tests/agent/utils.test.ts > spawnBackgroundAgent > Should spawn an agent in the background.
They all spawn an agent and wait a fixed amount of time before checking whether it is online. During CI/CD there is a good chance that the time it takes for the agents to start exceeds this arbitrary delay, which causes the tests to fail. The delay exists because there is currently no reliable feedback for when the agent has fully started up. We need a more robust way to write these tests; in particular, a better way to tell when the agent has started.
Additional context
Currently, to get things working, I've created a simple polling function that periodically checks for a condition to become true.
This too could use some improvement. It runs the function you provide once every `delay`; when the function returns true or the polling loop exceeds the `timeout`, the `expect()` is run. This makes it possible to set a large timeout while waiting for a condition without always waiting the full duration. It only solves part of the problem, though.
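The polling helper described above could look something like this sketch (`pollFor` and its parameter names are illustrative, not the actual implementation in the repo):

```typescript
// Run `condition` every `delay` ms until it returns true or `timeout`
// ms have elapsed. Resolves with the last observed result either way,
// so the caller's expect() runs against it without always waiting the
// full timeout.
async function pollFor(
  condition: () => boolean | Promise<boolean>,
  timeout = 30_000,
  delay = 100,
): Promise<boolean> {
  const deadline = Date.now() + timeout;
  let result = await condition();
  while (!result && Date.now() < deadline) {
    await new Promise((r) => setTimeout(r, delay));
    result = await condition();
  }
  return result;
}
```

A test would then write something like `expect(await pollFor(() => agentIsOnline(nodePath))).toBe(true);`, where `agentIsOnline` stands in for whatever status check the test already uses.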
Tasks
Determine a more robust way to do these tests.
Refactor the tests.
Check that tests pass.
Check that CI/CD passes.