A Storybook-based front-end for the Basilisk parachain, employing react-use hooks and Apollo Client for the data layer.
Use `yarn` to install dependencies:

```shell
yarn install
```

Start the Storybook component development environment:

```shell
yarn storybook:start
```

Storybook can then be opened at `http://localhost:6006`.

Run the app in development mode locally. This requires a running Basilisk testnet API, and optionally its indexer and processor as well:

```shell
yarn start
```

Open `http://localhost:3000` to view it in the browser. The page will reload when you make edits, and any lint errors will appear in the console.

Start tests in interactive mode:

```shell
yarn test
```
The GitHub Actions workflow is configured to deploy the UI application and Storybook builds at the same time. Each `develop`, `feat/**`, or `fix/**` branch deploys to the appropriate folder in the `app-builds-gh-pages` branch. Each branch folder contains 2 sub-folders, `app` and `storybook`, for the UI app and Storybook builds respectively.

UI app builds and Storybooks are hosted on GitHub Pages. To access the builds you can use these paths:

- UI app - `https://galacticcouncil.github.io/Basilisk-ui/<folder_name>/<subfolder_name?>/app`
- Storybook build - `https://galacticcouncil.github.io/Basilisk-ui/<folder_name>/<subfolder_name?>/storybook`
Deployment triggers:

```yaml
push:
  branches:
    - develop
    - 'fix/**'
    - 'feat/**'
    - 'release/**'
pull_request:
  branches:
    - 'develop'
```
To build optimized production artifacts locally, run:

```shell
yarn build
```

To ensure consistent code across our codebase, we use both Prettier and ESLint. You can either run `yarn lint` / `yarn lint --fix` or `yarn prettier` / `yarn prettier --write`, or make use of the built-in pre-commit Prettier & lint checks for staged files.
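A pre-commit setup for staged files is typically wired up with husky and lint-staged; the exact configuration in this repo may differ, so treat the following `package.json` fragment as a hypothetical sketch:

```json
{
  "lint-staged": {
    "*.{ts,tsx}": ["eslint --fix", "prettier --write"],
    "*.{json,md,css}": ["prettier --write"]
  }
}
```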
This section outlines the approaches followed while implementing this UI as a React app, including the distinction between different application layers.
The presentational layer presents and transforms the normalized data provided by the composition layer. It begins at the dumb component level: these components are fed data by containers through props. Dumb components should be developed in isolation via Storybook to meet the visual/layout/structural design requirements. They should only hold local state specific to their own presentational logic (e.g. `isModalOpen`), and should communicate with their respective parent components via props and handlers (e.g. `onClick` / `handleOnClick`).
Example:

```tsx
// reuse generated models from the data layer
import { Account } from './generated/graphql'

// data to be presented, passed via props
export interface WalletProps {
  activeAccount?: Account
  onActiveAccountClick: () => void
}

export const Wallet = ({ activeAccount, onActiveAccountClick }: WalletProps) => {
  return (
    <div onClick={() => onActiveAccountClick()}>
      <p>{activeAccount?.name}</p>
    </div>
  )
}
```
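Each dumb component should also get a story so it can be developed in isolation. A hypothetical minimal `.stories.tsx` for the `Wallet` component above, assuming Component Story Format (the mocked `Account` shape and story title are illustrative, not this repo's exact conventions):

```tsx
// Wallet.stories.tsx - hypothetical story for the Wallet component
import { Wallet } from './Wallet'

export default {
  title: 'components/Wallet',
  component: Wallet
}

export const WithActiveAccount = () => (
  <Wallet
    // mocked account data; the generated Account type may require more fields
    activeAccount={{ name: 'Alice', id: 'alice-account-id' } as any}
    onActiveAccountClick={() => console.log('active account clicked')}
  />
)
```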
Our presentation layer testing strategy is based on a combination of Storybook stories and Playwright. Each presentation layer component must have a useful `.stories.tsx` file to go along with it. Storybook serves the `.stories.tsx` file, and we then use Playwright to visit that story to test and screenshot each aspect, variation, and interaction.
- Best to look in this repo at the `.stories.test.ts` files and their corresponding `.stories.tsx` files to see how this works.
- Generating screenshots:
  - If you run a `stories.test.tsx` containing a screenshot comparison test but a screenshot is missing/deleted, the test will fail. Even though it fails, Playwright will take a screenshot of whatever is there and use it for comparison subsequently (it will tell you it did so in the console). You can replace an existing screenshot in this way.
  - Find all screenshots in `storybook-testing/screenshots-to-test-against`.
  - When any screenshot comparison fails, find the results in `storybook-testing/results/screenshot-comparison-fails`.
- Scripts:
  - Running `.stories.test` file(s): start Storybook (`yarn storybook:start`), then run `yarn storybook:test <file name(s)>`.
    - Example: `yarn storybook:test Button.stories.test AssetBalanceInput`. A partial name such as `AssetBalanceIn` also works; the file name doesn't have to be complete.
    - Omit `<file name(s)>` to run all the `.stories.test` files.
  - Debugging: append the `--headed` flag to the above script to see test execution in-browser, and when using `page.pause`.
  - CI: `yarn storybook:test:ci` starts Storybook for you and runs the entire Storybook + Playwright testing infrastructure.
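A `.stories.test.ts` file typically looks something like the following sketch (the file name, story ID, and screenshot name are assumptions for illustration, not this repo's exact code):

```ts
// Wallet.stories.test.ts - hypothetical Playwright screenshot test
import { test, expect } from '@playwright/test'

test('Wallet story renders the active account', async ({ page }) => {
  // visit the story served by `yarn storybook:start` in isolation mode
  await page.goto(
    'http://localhost:6006/iframe.html?id=components-wallet--with-active-account'
  )
  // compare against the stored screenshot; a missing screenshot is
  // generated on the first (failing) run, as described above
  await expect(page).toHaveScreenshot('wallet-with-active-account.png')
})
```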
The composition layer brings together the presentational layer and the data layer. Instead of dumb components, smart containers are utilized to orchestrate the fetching of data required by the presentational layer. These containers should not contain any direct data fetching themselves; instead, they should utilize simple-to-complex GraphQL queries. This ensures a clear separation of concerns and allows for transparent data flow and debugging.

One of the major roles of the composition layer is to determine when data should be initially fetched (or subsequently refetched). Since our data layer is powered by the Apollo client, fetching data means simply dispatching a query to the client itself. If the data isn't present in the data layer's normalized cache, sending a query will trigger actual fetching of the data, e.g. from a remote source (depending on the underlying data layer implementation).
There are a few approaches to data composition within our UI:

- `useQuery` - immediately requests data via the data layer's resolvers
- `useLazyQuery` - returns a callback that can be timed or manually executed to request the data at a later time (e.g. after a timeout or on user interaction)
- `constate` - both query types can be contextualized to avoid concurrency issues where multiple containers use the same queries at the same time (at time of rendering)
- `cache.readQuery` / `cache.readFragment` - only reads already-cached data, without making a roundtrip to the data resolver
Loading statuses in Apollo are mostly represented in two ways: one is the `loading` property returned from both queries and mutations; the second is the `networkStatus`, which is available and updated if `notifyOnNetworkStatusChange: true` is set in the query/mutation options.

Please make sure to set `notifyOnNetworkStatusChange: true` on your queries and mutations.
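`networkStatus` is a numeric enum, which lets a container distinguish an initial load from a refetch. As a rough self-contained illustration, the enum values below mirror Apollo Client's `NetworkStatus` (verify against the installed version before relying on them):

```typescript
// Mirrors Apollo Client's NetworkStatus enum values (assumed; verify against your version)
enum NetworkStatus {
  loading = 1,
  setVariables = 2,
  fetchMore = 3,
  refetch = 4,
  poll = 6,
  ready = 7,
  error = 8,
}

// true only while the very first request for a query is in flight
const isInitialLoading = (status: NetworkStatus): boolean =>
  status === NetworkStatus.loading

// true while a refetch is in flight; only observable when
// `notifyOnNetworkStatusChange: true` is set on the query
const isRefetching = (status: NetworkStatus): boolean =>
  status === NetworkStatus.refetch
```

This lets a container show a full-page spinner only on the initial load, and a lighter inline indicator on refetches.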
Example:

```tsx
import { gql, useQuery } from '@apollo/client'
import { Wallet as WalletComponent } from './components/Wallet'
import { Query } from './generated/graphql'

export interface GetActiveAccountQueryResponse {
  // you have to be extra careful when composing the generated types;
  // this issue leaks into the data layer itself in terms of the data returned from a query
  activeAccount: Query['activeAccount']
}

// query
export const GET_ACTIVE_ACCOUNT = gql`
  query GetActiveAccount {
    activeAccount {
      name
      id
      balances
    }
  }
`

export const useGetActiveAccountQuery = () =>
  useQuery<GetActiveAccountQueryResponse>(GET_ACTIVE_ACCOUNT)

// container
export const Wallet = () => {
  // request data from the data layer
  // (`data` is undefined while loading, hence the optional chaining)
  const { data } = useGetActiveAccountQuery()

  // render the component with the provided data
  return <WalletComponent activeAccount={data?.activeAccount} />
}
```
The data layer is provided by the Apollo client; containers are not aware of where the data comes from, they only define what data they're interested in. We use Apollo's local resolvers to provide the data requested by containers via queries.

A local resolver is a function that can be parametrized via a query, and which returns (resolves) data for the given entity (e.g. accounts). As far as separation of concerns goes within the data layer itself, the resolver should only parse out the query arguments and call subsequent functions that take care of the data fetching itself.

Fetching of data is facilitated by query resolvers; writing of data (both local and remote) is facilitated by mutation resolvers.
Please refer to `src/hooks/extension` for a simple example of folder structure & code separation.

Queries and resolvers should be tested in unison. This can easily be done by writing an integration test that sets up a resolver and a component that consumes it via a query. Resolver internals can be mocked to make testing easier. Please look at `src/hooks/extension/resolvers/query/extension.test.tsx` for reference.
The overall strategy for data fetching in our UI is to provide the latest possible data using the Basilisk node itself. This ensures that our users are presented with the latest possible data, and can make well-informed decisions, e.g. for trading. Our current backend infrastructure ([Basilisk API](https://github.com/galacticcouncil/Basilisk-api)) only processes finalized data, which can and will be more than 3 blocks behind the actual latest data on-chain.

Thanks to our local resolver architecture, we can easily compose various data sources and serve them in a unified manner via complex queries.
Example of a query resolver:

```tsx
export interface PoolResolverArgs {
  assetIds?: { assetAId: string, assetBId: string },
  id?: string
}

export const getPoolIdFromAssetIds = (...) => {...}

export const withTypename = (pool: Pool) => {
  const __typename = pool.__typename === 'LBPPool' ? 'LBPPool' : 'XYKPool';
  return {
    ...pool,
    // spread first, so the computed `__typename` is not overwritten
    __typename
  }
};

/**
 * Resolver for the `Pool` entity is a simple factory function,
 * accepting dependencies as arguments and returning the resolver itself
 */
export const poolResolverFactory = (
  apiInstance: ApiInstance,
) => async (
  _obj,
  args: PoolResolverArgs
) => {
  const poolId = args.assetIds ? await getPoolIdFromAssetIds(args.assetIds) : args.id;
  if (!poolId) throw new Error('poolId not found');
  const pool = await getPool(apiInstance, poolId);
  return withTypename(pool)
}

/**
 * In order to access contextual dependencies, the resolver
 * must be wrapped as a hook
 */
export const usePoolResolver = () => {
  // polkadot.js api instance from the parent context
  const { apiInstance } = usePolkadotJsContext();
  return {
    Query: {
      /**
       * Passed by reference, so that Apollo can use
       * the latest function after the resolver's dependencies change.
       */
      pool: useResolverToRef(useMemo(
        () => poolResolverFactory(apiInstance),
        [apiInstance]
      ))
    }
  };
};
```
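The spread order in `withTypename` is easy to get wrong: spreading `...pool` *after* the computed `__typename` would silently restore the pool's original (possibly missing) value. A minimal self-contained sketch, using a simplified stand-in for the `Pool` type, demonstrates the correct order:

```typescript
// Simplified stand-in for the generated Pool type (illustrative only)
interface Pool {
  __typename?: string
  id: string
}

const withTypename = (pool: Pool) => {
  const __typename = pool.__typename === 'LBPPool' ? 'LBPPool' : 'XYKPool'
  // spread first, then override, so the computed value always wins
  return { ...pool, __typename }
}
```

With this order, a pool fetched without a `__typename` still ends up tagged (defaulting to `'XYKPool'`), which Apollo needs for cache normalization.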
All application layers must be tested as fully as possible. At the moment we don't set a strict rule for test coverage thresholds, but there is a temporary soft rule for all new files: each new file must have test coverage of no less than 90%.
We use a conventional commits and pull-request naming strategy. The specification for conventional naming can be found here.

Commits: to make conventional commits, you can run `yarn commit` and go through the commit flow, or add a conventional commit message manually.

Pull requests: each pull request name must follow the Conventional Commits messaging strategy.

For any PR to be merged successfully, it must meet the following requirements:

- at least 1 review from repository contributors;
- a review from a Code Owner;
- the working branch must be up to date before merge;
- all conversations must be resolved;
- the Semantic Pull Requests check must pass.
The following pull request types are supported: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`, `revert`.
For unit testing we use Jest. All app unit-testing configs are defined in `craco.config.js`, in the `jest` section.

To run all unit tests, execute:

```shell
# Local testing
yarn test

# Testing in CI flow
yarn test:ci
```
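The `jest` section of `craco.config.js` typically looks something like the following sketch (the keys shown are common craco/Jest options, not necessarily this repo's exact config):

```javascript
// craco.config.js (hypothetical excerpt)
module.exports = {
  jest: {
    configure: {
      // collect coverage from source files, skipping stories
      collectCoverageFrom: ['src/**/*.{ts,tsx}', '!src/**/*.stories.tsx'],
      // text output for the terminal, lcov for the ./coverage HTML report
      coverageReporters: ['text', 'lcov'],
    },
  },
}
```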
The testing process provides a code coverage report in the terminal as text output, and a detailed report in the `./coverage` folder. Moreover, the detailed report is used in the GH Actions testing workflow to publish a coverage report as a comment in the appropriate pull request. `./coverage/lcov-report/index.html` can be opened in a web browser to investigate coverage details.
If you need to test a specific file and get a code coverage report for just that file, use this approach:

```shell
yarn test <test-file-name.test.tsx> --collectCoverageOnlyFrom=<tested-file-name.tsx>

# For instance:
yarn test src/hooks/balances/resolvers/query/balances.test.tsx --collectCoverageOnlyFrom=src/hooks/balances/resolvers/query/balances.tsx
```
- Clone the polkadot-dapp repo into the `./polkadot-dapp` folder
- Use Node.js v16.13.2

```shell
cd extension
yarn
yarn build
```

- Unzip the newly built `master-build` archive into the same folder as the archive's root. All necessary extension files are located in the `master-build` folder, which can be used as the dapp src root.
GH Actions must have the following repo secrets configured:

- `E2E_TEST_ACCOUNT_NAME_ALICE`
- `E2E_TEST_ACCOUNT_PASSWORD_ALICE`
- `E2E_TEST_ACCOUNT_SEED_ALICE`

To run e2e tests locally, the project's root folder must contain a `.env.test.e2e.local` config file with the same variable definitions as `.env.test.e2e.ci`, but with the `__VAR_NAME__` placeholders replaced by real values (these placeholders are replaced with the repo secrets during the GH Actions workflow).
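Substituting the placeholders can be done by hand, or with a quick `sed` pass. A minimal illustration of the substitution (the variable name and value here are examples, not real secrets):

```shell
# Replace a placeholder the same way the CI workflow does (illustrative)
printf 'E2E_TEST_ACCOUNT_NAME_ALICE=__VAR_NAME__\n' | sed 's/__VAR_NAME__/alice/'
```

In practice you would run the substitution over the whole template, e.g. `sed 's/__VAR_NAME__/<real value>/' .env.test.e2e.ci > .env.test.e2e.local`, once per placeholder.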
To run e2e tests locally you should:

- Run `npx playwright install` if necessary
- Build the UI project
- Run a local testnet (with Basilisk-api)
- Serve the built UI project on a local server at `http://127.0.0.1:3000` (can be `yarn start`)
- Run tests with `yarn test:e2e-local`
- Check the test results in `ui-app-e2e-results.html` and screenshots in `./traces`
To run the tests, a Storybook server must be running:

```shell
yarn storybook:start
```

Alternatively, the Storybook build can be served by any other server on port 6006. For instance:

```shell
yarn storybook:build

# use the node.js server library http-server
http-server storybook-static --port 6006
```
Run tests:

```shell
yarn storybook:test
```

As the watcher library we use chokidar-cli.

To test Storybook in watch mode, the Storybook server must be running:

```shell
yarn storybook:start
```

The watcher can then be started in a separate terminal window:

```shell
yarn storybook:test:watch

# or in --headed mode
yarn storybook:test:watch-headed
```
More details here.

You have to use the legacy OpenSSL provider on Node 17+. Set this in your node options:

```shell
export NODE_OPTIONS=--openssl-legacy-provider
```