The CrimLog API is the backbone of the CrimLog protocol. It is a GraphQL API written in TypeScript with the Nest.js framework. Prisma is used as the ORM and pnpm as the package manager.
- Node.js v16.18 or greater (https://nodejs.org/en/download/releases/)
- pnpm v7.x (https://pnpm.io/installation)
- Git (https://git-scm.com/downloads)
- MongoDB (https://www.mongodb.com/docs/manual/tutorial/getting-started/)
This is a pnpm project, and the use of other package managers such as npm or yarn is strongly discouraged. After installing Node.js, pnpm can be enabled through the built-in Corepack API. Run the following commands to set up pnpm on your system:
corepack enable
corepack prepare pnpm@latest --activate
Once pnpm has been enabled on your system, navigate to the root directory and execute the command:
pnpm install --frozen-lockfile
This command locally installs all of the project dependencies and may take some time to run.
Afterwards, set up a template `.env` file by copying the contents of the `.env.sample` file into a new `.env` file located in the root.
CrimLog makes use of several code generation tools to improve the development experience. It is important to regenerate files after making changes to certain areas of the codebase.
Prisma, the ORM, provides complete and thorough TypeScript types for all database models and queries. The `schema.prisma` file is the single source of truth for these types.
Whenever `schema.prisma` is updated, the `prisma:generate` script must be run to regenerate the Prisma types. Alternatively, `prisma:generate:w` can be run once to continuously watch the Prisma schema file and regenerate types automatically on save. Run this command to autogenerate the entire Prisma client into your `node_modules/` folder (necessary before proceeding to the Database section):
pnpm prisma:generate
The autogenerated Prisma typings are not committed to the source repository.
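The payoff of the generated types can be sketched roughly as follows. Note that the `Course` shape and the `pickCourseName` helper below are invented for illustration; they are not the actual generated client:

```typescript
// Illustrative sketch only — these shapes are assumptions, not the real
// generated output. prisma:generate derives model types like this one
// directly from schema.prisma:
type Course = { id: string; name: string };

// The real client exposes fully typed query methods (e.g.
// prisma.course.findUnique({ where: { id } })) whose return types, such as
// Course | null, come from the schema. Downstream code is then type-safe:
function pickCourseName(course: Course | null): string | undefined {
	return course?.name;
}
```

Because the types flow from the schema, renaming a field in `schema.prisma` and regenerating immediately surfaces every out-of-date usage as a compile error.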
CrimLog uses GraphQL Code Generator to convert the GraphQL schema into TypeScript types. This comes in handy for input types, especially in the case of data validation. All autogenerated typings are stored in the `graphql/typings.ts` file and committed to the source repository.
Whenever any GraphQL file is updated, the `graphql-codegen` script should be run to regenerate the `typings.ts` file:
pnpm graphql-codegen
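As an illustration, the generated file contains input types of roughly the following shape. The names here are invented for the example and are not the actual generated output:

```typescript
// Roughly the kind of type graphql-codegen emits into graphql/typings.ts
// for a GraphQL input type (names invented for illustration):
export type CourseCreateInput = {
	name: string;
	courseNumber?: string | null;
};

// Resolvers can accept the generated type, so a schema change that is not
// reflected in the resolver code surfaces as a compile-time error:
function courseCreate(input: CourseCreateInput) {
	return { id: 'course_1', ...input };
}
```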
MongoDB is the API's database provider. Once you've created your own MongoDB instance, obtain the connection string and set it as the value of the `DATABASE_URL` environment variable.
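For example, a `.env` entry might look like the following; the user, password, host, and database name here are placeholders, not real values:

```
# Example only — substitute your own cluster's connection string
DATABASE_URL="mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/crimlog?retryWrites=true&w=majority"
```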
To seed your database with some sample data, first run the `prisma:push` script. This will apply the Prisma schema to your newly created Mongo database. Then execute the `prisma:seed` script to populate the database:
pnpm prisma:push
pnpm prisma:seed
Nest.js has a CLI that is used for compiling the source files into the local `dist/` folder. To compile the API in a development environment, run the command:
pnpm start:dev
Once the console outputs successful startup messages, you should be able to navigate to http://localhost:3000/graphql and interact with the API through the Apollo Playground.
This command also places the API in watch mode: any changes made to TS files will automatically be recompiled and redeployed by Nest.
At this point, the project should be set up and ready for local development.
When practical, we try to follow the Shopify Official GraphQL Design Guidelines. Some rules we consider to be of particular importance are:
- Rule #18: Only make input fields required if they're actually semantically required for the mutation to proceed.
- Rule #21: Structure mutation inputs to reduce duplication, even if this requires relaxing requiredness constraints on certain fields.
- Rule #17: Prefix mutation names with the object they are mutating for alphabetical grouping (e.g. use orderCancel instead of cancelOrder).
- Rule #14: Write separate mutations for separate logical actions on a resource.
- Rule #8: Always use object references instead of ID fields.
  - Exception: mutation inputs
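For instance, rules #17 and #18 together suggest schema shapes like the following. This is a hypothetical sketch reusing the guideline's own `order` example, not taken from the actual CrimLog schema:

```graphql
type Mutation {
	# Rule #17: object-prefixed names group alphabetically
	orderCancel(input: OrderCancelInput!): OrderCancelPayload
	orderClose(input: OrderCloseInput!): OrderClosePayload
}

input OrderCancelInput {
	orderId: ID!
	# Rule #18: only semantically required fields are non-null
	reason: String
}
```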
Integration (e2e) testing was designed in accordance with the principles established by nodejs-integration-tests-best-practices. A GitHub workflow runs on all pull requests into `dev` and executes the integration test suites defined in the `test/` directory.
The boilerplate for an integration test file is fairly minimal, and looking at simple existing examples (such as `course.e2e-spec.ts`) is a good reference.
To begin, create a file with the name `{entity}.e2e-spec.ts`, where `{entity}` is the name of the entity that will be tested in this file. After the file is created, basic boilerplate content can be inserted:
```typescript
import { _afterAll, _beforeAll } from './hooks';
import { GraphQLClient } from './util';

let api: GraphQLClient;

beforeAll(async () => {
	// use common beforeAll code
	({ api } = await _beforeAll());
});

afterAll(async () => {
	// use common afterAll code
	await _afterAll();
});

describe('{Entity Name}', () => {
	test('{when x, then y}', async () => {
		// Arrange
		// create data necessary for the test

		// Act
		// call the GQL api

		// Assert
		// use expect() statements
	});
});
```
Replace the text surrounded by curly braces as appropriate. The body of each test should follow the AAA anatomy: Arrange, Act, Assert. Again, a good way to develop an understanding of the current integration testing process is to check a recently updated `.e2e-spec.ts` file for examples.
When the `pnpm test:e2e` command is run:
- A Docker container based off a Mongo replica image is created via `docker compose`
  - A completely empty Mongo database is created and its port exposed
  - This container is designed to persist in between test runs and is not terminated when a test run completes
  - A dummy document is inserted into the `test` database to ensure it persists after the initial db creation
- From the developer's local system, the Prisma CLI is used (via npm) to structure the db schema
  - The command `npx prisma db push` is executed (ref): the entire Prisma schema is pushed onto the empty Mongo database
- From the developer's local system, the command `prisma db seed` is executed and initial seed data is loaded into the containerized test database
  - This seed only contains meta/necessary data, such as Courses, Professors, etc.
- Each test suite begins executing (see below)
- After setup has completed, each test suite is executed by Jest
  - Test suites are defined in the `test/` directory with the name `[entity].e2e-spec.ts`
    - Multiple test suites can be defined per file
  - All test suites are executed in parallel, but each individual test inside a suite is executed sequentially
- Before each test suite is executed, a new instance of the Nest.js API is created for that suite to test
  - A random port number is used to prevent collisions
- Each test case inside the suite performs one or more GraphQL queries/mutations to ensure the expected integrated functionality between the API, the database, and anything in between
- After all tests have completed (whether they pass or fail), the Nest.js application created for that specific test suite is destroyed
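The random-port trick can be illustrated with a plain Node server. This is a minimal sketch; the real suites boot a Nest.js application rather than a raw http server:

```typescript
import { createServer } from 'node:http';
import type { AddressInfo } from 'node:net';

// Listening on port 0 asks the OS for any free port, which is one way to
// avoid collisions when several test suites boot an app in parallel.
const server = createServer((_req, res) => res.end('ok'));
server.listen(0, () => {
	const { port } = server.address() as AddressInfo;
	console.log(`suite API listening on random port ${port}`);
	server.close();
});
```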
Rome is the project's linting & formatting tool of choice. It includes several defaults out of the box, which generally serve to improve the developer experience by eliminating complex configurations and the perpetual debates that often surround specific rules. Its configuration is defined in the root-level `rome.json` file.
Another benefit of Rome's lack of configuration options (when compared to alternatives such as ESLint or JSLint) is the "freedom" it can offer developers, even on teams, in making some personal coding style decisions. The CrimLog development team is currently small enough that a slight degree of flexibility like this can make software development a more pleasant process. There is no arbitrary linter, established by some senior developer 10 years ago, that harasses you for every other line of code you write. Instead, there's a lightweight, minimal linter that provides occasional suggestions for the purpose of enforcing a high-level coding standard, while still allowing you the freedom to code how you prefer and are used to. The humanity of developers can often be overlooked in work environments, and CrimLog aims to preserve the importance of human idiosyncrasy as much as possible.
Although linting can be performed entirely through the CLI, installing the Rome IDE extension is recommended for convenience. Linting via CLI is managed through npm scripts: `pnpm lint` will output detected issues, and `pnpm lint:fix` will automatically resolve them (when possible).
Explanation of CrimLog-specific linting rules that have been disabled:
Although many uses of the double-bang operator (`!!`) are criticized for unnecessary complexity, those criticisms often end up being overstatements. The double-bang operator, when used appropriately, gives the developer viewing it immediate knowledge that the subject value is not a boolean.
JavaScript type coercion, while a beautiful feature, is frequently abused. For example:
if (data) { ... }
A developer reading this without any knowledge of the codebase would have very little clue as to the type of `data`. Disregarding TypeScript, because JavaScript is what coerces values at runtime, it is unknown whether `data` is a boolean, number, string, object, or anything else. Consider, on the other hand:
if (!!data) { ... }
With the double negation, it is clear to anyone who reads the code in the future that `data` is not a boolean. Developers can avoid trivial mistakes made while coding or debugging that would treat `data` as an explicit boolean at runtime.
Double negation is preferred over the `Boolean()` constructor because of its shorter character count.
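A small self-contained illustration of the behavior being described (the `toBool` helper is invented for the example):

```typescript
// !! coerces any value to a true boolean; Boolean() produces the identical
// result with more characters.
function toBool(value: unknown): boolean {
	return !!value;
}

// Falsy inputs (0, '', null, undefined, NaN) coerce to false; everything
// else, including empty objects and arrays, coerces to true.
```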
The `delete` operator in JavaScript is reasonably safe to use and only inefficient in loops. It's a convenience operator that, when used responsibly, offers improved readability and syntactic simplicity.
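A quick sketch of responsible `delete` usage; the object and property names here are invented for the example:

```typescript
// delete removes the property itself, rather than leaving it set to
// undefined, which matters for serialization and `in` checks.
const session: { user: string; tempToken?: string } = {
	user: 'ada',
	tempToken: 'abc123',
};

delete session.tempToken;
```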
Frankly, using the `any` type defeats the purpose of writing code in TypeScript over JavaScript in the first place. If dynamic types are desired, simply return to the hassle-free environment of interpreted JavaScript and avoid the headaches associated with turning a dynamically typed language into a compiled one.
Unfortunately, in the JS ecosystem, dynamic types are nearly inevitable, even when using TypeScript; usage of third-party libraries is a great example. For reasons like this, the `any` type is restrictively allowed in the CrimLog API. It is heavily discouraged, and the linter will provide warnings instead of errors. Developers are encouraged to pursue other solutions, such as the `unknown`/`never` types or Narrowing.
As established above, Rome is also used for formatting, although its functionality is currently limited to JavaScript and TypeScript files. Its configuration is defined in the root-level `rome.json` file. To view formatting issues via CLI, use the npm script `pnpm format`. To automatically fix formatting issues via CLI, use the npm script `pnpm format:fix`.
To supplement areas that the Rome formatter cannot reach, we use Prettier. However, all Prettier formatting is performed at the IDE level and is not included in the npm dependencies or any CI pipelines. IDE-level formatting is achieved through integrations such as the Prettier extension for VSCode.
General formatting settings can be found in `.vscode/settings.json`. Here is a brief summary:
- Semicolons are always used
- Single quotes are always used unless impractical (e.g. escaping contractions: `'don\'t do this'`)
- Trailing commas are always used on multiline items (including function parameters/arguments)
- LF is the preferred EOL character
- Lines are indented with tabs instead of spaces
- Organize import statements
  - External modules appear before relative modules
  - All import statements are sorted by module name, ascending
  - All named imports are sorted by export name, ascending
- Relative module imports should always be used for local project files
- Relative module file extensions should be omitted wherever possible (e.g. `app.module` instead of `app.module.ts`)
All TypeScript standards from linting apply. Additionally:
- Explicitly declare types as often as practical
- Prefer `unknown` over `any`
- Use singular form for top-level entity names
  - Plural form may be used for entity fields when the field will contain multiples of something (e.g. an array)
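As a sketch of the naming convention, consider the following; the entity and field names are invented examples:

```typescript
// Singular top-level entity name; plural field name for the array it holds.
type Course = {
	id: string;
	name: string;
	studentIds: string[]; // plural: the field contains multiples
};

const course: Course = {
	id: 'c1',
	name: 'Algorithms',
	studentIds: ['s1', 's2'],
};
```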