diff --git a/docs/blog/2023-04-19-nx-cloud-3.md b/docs/blog/2023-04-19-nx-cloud-3.md index 160210c526739..52de33d47cb83 100644 --- a/docs/blog/2023-04-19-nx-cloud-3.md +++ b/docs/blog/2023-04-19-nx-cloud-3.md @@ -116,6 +116,6 @@ In addition, we are actively exploring ways to provide advanced analytics for yo - [Nx Docs](/getting-started/intro) - [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2023-06-29-nx-console-gets-lit.md b/docs/blog/2023-06-29-nx-console-gets-lit.md index 1bbe59be224e2..f133dc9183673 100644 --- a/docs/blog/2023-06-29-nx-console-gets-lit.md +++ b/docs/blog/2023-06-29-nx-console-gets-lit.md @@ -332,6 +332,6 @@ If the prettier UI and better performance haven’t convinced you, this surely w - [Nx Docs](/getting-started/intro) - [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2023-09-25-nx-raises.md b/docs/blog/2023-09-25-nx-raises.md new file mode 100644 index 0000000000000..bf19f0d26f8e3 --- /dev/null +++ b/docs/blog/2023-09-25-nx-raises.md @@ -0,0 +1,20 @@ +--- +title: Nx Raises $16M Series A +authors: [Jeff Cross] +tags: [nx] +--- + +Victor and I are excited to announce that Nx has raised another $16M in a Series A funding round with Nexus Venture Partners and a16z! See our announcement video for more, and be sure to check out the [live stream of Nx Conf 2023](https://youtube.com/live/IQ5YyEYZw68?feature=share) tomorrow to see what we’re up to! 
+ +{% youtube src="https://www.youtube.com/embed/KuyYhC4ClW8?si=qoZL6i6X1E7wjChD" %} + +--- + +## Learn more + +- [Nx Docs](/getting-started/intro) +- [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) +- [Nx GitHub](https://github.com/nrwl/nx) +- [Nx Official Discord Server](https://go.nx.dev/community) +- [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) +- [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2023-10-13-nx-conf-2023-recap.md b/docs/blog/2023-10-13-nx-conf-2023-recap.md index fbd8725a95390..307138c233359 100644 --- a/docs/blog/2023-10-13-nx-conf-2023-recap.md +++ b/docs/blog/2023-10-13-nx-conf-2023-recap.md @@ -492,6 +492,6 @@ If you enjoyed these, [subscribe to our YouTube channel](https://www.youtube.com - [Nx Docs](/getting-started/intro) - [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2023-11-08-state-management.md b/docs/blog/2023-11-08-state-management.md new file mode 100644 index 0000000000000..0cffa7282e706 --- /dev/null +++ b/docs/blog/2023-11-08-state-management.md @@ -0,0 +1,623 @@ +--- +title: State Management Nx React Native/Expo Apps with TanStack Query and Redux +authors: [Emily Xiong] +image: '/blog/images/2023-11-08/featured_img.webp' +tags: [nx, React Native] +--- + +There are currently countless numbers of state management libraries out there. This blog will show you how to use state management for React Native in Nx monorepo with [TanStack Query](https://tanstack.com/query/latest) (which happens to use [Nx on their repo](https://cloud.nx.app/orgs/6412ca9d1c251d000efa21ba/workspaces/6412c827e6da5d7b4a0b1fe3/overview)) and Redux. + +This blog will show: + +- How to set up these libraries and their dev tools +- How to build the sample page below in React Native / Expo with state management +- How to do unit testing + +It will call an API and show a cat fact on the page, allowing users to like or dislike the data. + +![](/blog/images/2023-11-08/bodyimg1.webp) + +Github repo: [https://github.com/xiongemi/nx-expo-monorepo](https://github.com/xiongemi/nx-expo-monorepo) + +--- + +## Before We Start + +From [TanStack Query documentation](https://tanstack.com/query/latest/docs/framework/react/guides/does-this-replace-client-state), it says: + +- [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview) is a **server-state** library. +- [Redux](https://redux.js.org/) is a client-state library. + +What is the difference between the server state and the client state? + +In short: + +- Calling an API, dealing with asynchronous data-> server state +- Everything else about UI, dealing with synchronous data -> client state + +## Installation + +To use **[TanStack Query / React Query](https://tanstack.com/query/latest)** for the server state, I need to install: + +- Library: [@tanstack/react-query](https://tanstack.com/query/latest) +- Dev tools: [@tanstack/react-query-devtools](https://tanstack.com/query/latest/docs/framework/react/devtools) + +I will use **Redux** for everything else. 
+ +- Library: [redux](https://github.com/reduxjs/redux), react-redux, @reduxjs/toolkit +- Dev tools: [@redux-devtools/extension](https://github.com/zalmoxisus/redux-devtools-extension) +- Logger: [redux-logger](https://github.com/LogRocket/redux-logger), [@types/redux-logger](https://www.npmjs.com/package/@types/redux-logger) +- Storage: [redux-persist](https://github.com/rt2zz/redux-persist), [@react-native-async-storage/async-storage](https://github.com/react-native-async-storage/async-storage) + +To install all the above packages: + +```shell +#npm +npm install @tanstack/react-query @tanstack/react-query-devtools redux react-redux @reduxjs/toolkit @redux-devtools/extension redux-logger @types/redux-logger redux-persist @react-native-async-storage/async-storage --save-dev + +#yarn +yarn add @tanstack/react-query @tanstack/react-query-devtools redux react-redux @reduxjs/toolkit @redux-devtools/extension redux-logger @types/redux-logger redux-persist @react-native-async-storage/async-storage --dev + +#pnpm +pnpm add @tanstack/react-query @tanstack/react-query-devtools redux react-redux @reduxjs/toolkit @redux-devtools/extension redux-logger @types/redux-logger redux-persist @react-native-async-storage/async-storage --save-dev +``` + +## Server State with React Query + +### Setup Devtools + +First, you need to add React Query / TanStack Query in the `App.tsx`: + +```tsx +import React from 'react'; +import { QueryClient, QueryClientProvider } from '@tanstack/react-query'; +import { ReactQueryDevtools } from '@tanstack/react-query-devtools'; +import { Platform } from 'react-native'; + +const App = () => { + const queryClient = new QueryClient(); + return ( + + {Platform.OS === 'web' && } + ... + + ); +}; + +export default App; +``` + +Note: the [React Query Devtools](https://tanstack.com/query/latest/docs/framework/react/devtools) currently do not support react native, and it only works on the web, so there is a condition: `{ Platform.OS === ‘web’ && }.` + +For the react native apps, in order to use this tool, you need to use [react-native-web](https://necolas.github.io/react-native-web/) to interpolate your native app to the web app first. + +If you open my Expo app on the web by running `nx start cats` and choose the options `Press w │ open web`, you should be able to use the dev tools and see the state of my react queries: + +![](/blog/images/2023-11-08/bodyimg2.webp) + +Or you can run npx nx serve cats to launch the app in a web browser and debug from there. + +### Create a Query + +What is a query? + +> “A query is a declarative dependency on an asynchronous source of data that is tied to a unique key. A query can be used with any Promise-based method (including GET and POST methods) to fetch data from a server.” [(https://tanstack.com/query/v4/docs/react/guides/queries)](https://tanstack.com/query/v4/docs/react/guides/queries) + +Now let’s add our first query. In this example, it will be added under `lib/queries` folder. To create a query to fetch a new fact about cats, run the command: + +```shell +# expo workspace +npx nx generate @nx/expo:lib use-cat-fact --directory=queries + +# react-native workspace +npx nx generate @nx/react-native:lib use-cat-fact --directory=queries +``` + +Or use [Nx Console](/recipes/nx-console): + +![](/blog/images/2023-11-08/bodyimg3.webp) + +Now notice under libs folder, `use-cat-fact` folder got created under `libs/queries`: + +![](/blog/images/2023-11-08/bodyimg4.webp) + +If you use React Native CLI, just add a folder in your workspace root. 
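
Before wiring this query up, it may help to see the provider setup from the section above in one place. The sketch below is a minimal `App.tsx` wiring, with the rest of the app tree left as a placeholder (the `initialIsOpen` prop is just an example value; the web-only condition mirrors the one described earlier):

```tsx
import React from 'react';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { ReactQueryDevtools } from '@tanstack/react-query-devtools';
import { Platform } from 'react-native';

const App = () => {
  const queryClient = new QueryClient();
  return (
    <QueryClientProvider client={queryClient}>
      {/* Devtools only work on the web, so render them conditionally */}
      {Platform.OS === 'web' && <ReactQueryDevtools initialIsOpen={false} />}
      {/* ...navigation, screens, and the rest of your app go here... */}
    </QueryClientProvider>
  );
};

export default App;
```
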

For this app, let’s use this API: [https://catfact.ninja/](https://catfact.ninja/). At `libs/queries/use-cat-fact/src/lib/use-cat-fact.ts`, add code to fetch the data from this API:

```ts
import { useQuery } from '@tanstack/react-query';

export const fetchCatFact = async (): Promise<string> => {
  const response = await fetch('https://catfact.ninja/fact');
  const data = await response.json();
  return data.fact;
};

export const useCatFact = () => {
  return useQuery({
    queryKey: ['cat-fact'],
    queryFn: fetchCatFact,
    enabled: false,
  });
};
```

Essentially, you have created a custom hook that calls the `useQuery` function from the TanStack Query library.

### Unit Testing

If you render this hook directly and run the unit test with the command `npx nx test queries-use-cat-fact`, this error will show up in the console:

```shell
Invalid hook call. Hooks can only be called inside of the body of a function component. This could happen for one of the following reasons:
 1. You might have mismatching versions of React and the renderer (such as React DOM)
 2. You might be breaking the Rules of Hooks
 3. You might have more than one copy of React in the same app
 See https://reactjs.org/link/invalid-hook-call for tips about how to debug and fix this problem.
```

To solve this, you need to wrap the hook call inside the `renderHook` function from the `@testing-library/react-native` library:

1. **Install Library to Mock Fetch**

Depending on which library you use to make HTTP requests (e.g. `fetch` or `axios`), you need to install a library to mock the response.

- If you use `fetch` to fetch data, you need to install `jest-fetch-mock`.
- If you use `axios` to fetch data, you need to install `axios-mock-adapter`.

For this example, since it uses `fetch`, you need to install `jest-fetch-mock`:

```shell
#npm
npm install jest-fetch-mock --save-dev

#yarn
yarn add jest-fetch-mock --dev
```

You also need to mock the `fetch` library in `libs/queries/use-cat-fact/test-setup.ts`:

```ts
import fetchMock from 'jest-fetch-mock';

fetchMock.enableMocks();
```

2. **Create Mock Query Provider**

In order to test out the `useQuery` hook, you need to wrap it inside a mock `QueryClientProvider`. Since this mock query provider is going to be used more than once, let’s create a library for this wrapper:

```shell
# expo library
npx nx generate @nx/expo:library test-wrapper --directory=queries

# react native library
npx nx generate @nx/react-native:library test-wrapper --directory=queries
```

Then a component inside this library:

```shell
# expo library
npx nx generate @nx/expo:component test-wrapper --project=queries-test-wrapper

# react native library
npx nx generate @nx/react-native:component test-wrapper --project=queries-test-wrapper
```

Add the mock `QueryClientProvider` in `libs/queries/test-wrapper/src/lib/test-wrapper/test-wrapper.tsx`:

```tsx
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import React from 'react';

export interface TestWrapperProps {
  children: React.ReactNode;
}

export function TestWrapper({ children }: TestWrapperProps) {
  const queryClient = new QueryClient();
  return (
    <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
  );
}

export default TestWrapper;
```

3.
**Use Mock Responses in Unit Test** + +Then this is what the unit test for my query would look like: + +```tsx +import { TestWrapper } from '@nx-expo-monorepo/queries/test-wrapper'; +import { renderHook, waitFor } from '@testing-library/react-native'; +import { useCatFact } from './use-cat-fact'; +import fetchMock from 'jest-fetch-mock'; + +describe('useCatFact', () => { + afterEach(() => { + jest.resetAllMocks(); + }); + + it('status should be success', async () => { + // simulating a server response + fetchMock.mockResponseOnce( + JSON.stringify({ + fact: 'random cat fact', + }) + ); + + const { result } = renderHook(() => useCatFact(), { + wrapper: TestWrapper, + }); + result.current.refetch(); // refetching the query + expect(result.current.isLoading).toBeTruthy(); + + await waitFor(() => expect(result.current.isLoading).toBe(false)); + expect(result.current.isSuccess).toBe(true); + expect(result.current.data).toEqual('random cat fact'); + }); + + it('status should be error', async () => { + fetchMock.mockRejectOnce(); + + const { result } = renderHook(() => useCatFact(), { + wrapper: TestWrapper, + }); + result.current.refetch(); // refetching the query + expect(result.current.isLoading).toBeTruthy(); + + await waitFor(() => expect(result.current.isLoading).toBe(false)); + expect(result.current.isError).toBe(true); + }); +}); +``` + +If you use `axios`, your unit test would look like this: + +```tsx +// If you use axios, your unit test would look like this: +import { TestWrapper } from '@nx-expo-monorepo/queries/test-wrapper'; +import { renderHook, waitFor } from '@testing-library/react-native'; +import { useCatFact } from './use-cat-fact'; +import axios from 'axios'; +import MockAdapter from 'axios-mock-adapter'; + +// This sets the mock adapter on the default instance +const mockAxios = new MockAdapter(axios); + +describe('useCatFact', () => { + afterEach(() => { + mockAxios.reset(); + }); + + it('status should be success', async () => { + // simulating a server response + mockAxios.onGet().replyOnce(200, { + fact: 'random cat fact', + }); + + const { result } = renderHook(() => useCatFact(), { + wrapper: TestWrapper, + }); + result.current.refetch(); // refetching the query + expect(result.current.isLoading).toBeTruthy(); + + await waitFor(() => expect(result.current.isLoading).toBe(false)); + expect(result.current.isSuccess).toBe(true); + expect(result.current.data).toEqual('random cat fact'); + }); + + it('status should be error', async () => { + mockAxios.onGet().replyOnce(500); + + const { result } = renderHook(() => useCatFact(), { + wrapper: TestWrapper, + }); + result.current.refetch(); // refetching the query + expect(result.current.isLoading).toBeTruthy(); + + await waitFor(() => expect(result.current.isLoading).toBe(false)); + expect(result.current.isError).toBe(true); + }); +}); +``` + +Notice that this file imports `TestWrapper` from `@nx-expo-monorepo/queries/test-wrapper`, and it is added to `renderHook` function with `{ wrapper: TestWrapper }`. 
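
One detail that is easy to miss: the `test-setup.ts` file from step 1 only takes effect if Jest actually loads it. In an Nx workspace that is typically done in the library’s Jest config, for example via `setupFilesAfterEnv`. The snippet below is a sketch; the display name and preset path are assumptions based on the library layout above:

```ts
// libs/queries/use-cat-fact/jest.config.ts (sketch, adjust paths to your workspace)
export default {
  displayName: 'queries-use-cat-fact',
  preset: '../../../jest.preset.js', // assumed workspace-level Jest preset
  setupFilesAfterEnv: ['<rootDir>/test-setup.ts'], // runs fetchMock.enableMocks() before each test file
};
```
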
+ +Now you run the test command `nx test queries-use-cat-fact`, it should pass: + +```shell + PASS queries-use-cat-fact libs/queries/use-cat-fact/src/lib/use-cat-fact.spec.ts (5.158 s) + useCatFact + ✓ status should be success (44 ms) + ✓ status should be error (96 ms) +``` + +### Integrate with Component + +Currently `userQuery` returns the following properties: + +- `isLoading` or `status === 'loading'` - The query has no data yet +- `isError` or `status === 'error'` - The query encountered an error +- `isSuccess` or `status === 'success'` - The query was successful and data is available + +Now with components controlled by the server state, you can leverage the above properties and change your component to follow the below pattern: + +```ts +export interface CarouselProps { + isError: boolean; + isLoading: boolean; + isSuccess: boolean; +} + + +export function Carousel({ + isSuccess, + isError, + isLoading, +}: CarouselProps) { + return ( + <> + {isSuccess && ( + ... + )} + {isLoading && ( + ... + )} + {isError && ( + ... + )} + + ); +} + +export default Carousel; +``` + +Then in the parent component, you can use the query created above: + +```tsx +import { useCatFact } from '@nx-expo-monorepo/queries/use-cat-fact'; +import { Carousel } from '@nx-expo-monorepo/ui'; +import React from 'react'; + +export function Facts() { + const { data, isLoading, isSuccess, isError, refetch, isFetching } = + useCatFact(); + + return ( + + ... + ); +} +``` + +If you serve the app on the web and open the [React Query Devtools](https://tanstack.com/query/v4/docs/framework/react/devtools), you should be able to see the query I created `cat-fact` and data in the query. + +![](/blog/images/2023-11-08/bodyimg5.webp) + +--- + +## Redux + +### Create a Library + +First, you need to create a library for redux: + +```shell +# expo library +npx nx generate @nx/expo:lib cat --directory=states + +# react native library +npx nx generate @nx/react-native:lib cat --directory=states +``` + +This should create a folder under libs: + +![](/blog/images/2023-11-08/bodyimg6.webp) + +### Create a State + +For this app, it is going to track when users click the like button, so you need to create a state called `likes`. + +![](/blog/images/2023-11-08/bodyimg7.webp) + +You can use the [Nx Console](/recipes/nx-console) to create a redux slice: + +![](/blog/images/2023-11-08/bodyimg8.webp) + +Or run this command: + +```shell +npx nx generate @nx/react:redux likes --project=states-cat --directory=likes +``` + +Then update the redux slice at `libs/states/cat/src/lib/likes/likes.slice.ts`: + +```ts +import { + createEntityAdapter, + createSelector, + createSlice, + EntityState, +} from '@reduxjs/toolkit'; + +export const LIKES_FEATURE_KEY = 'likes'; + +export interface LikesEntity { + id: string; + content: string; + dateAdded: number; +} + +export type LikesState = EntityState; + +export const likesAdapter = createEntityAdapter(); + +export const initialLikesState: LikesState = likesAdapter.getInitialState(); + +export const likesSlice = createSlice({ + name: LIKES_FEATURE_KEY, + initialState: initialLikesState, + reducers: { + like: likesAdapter.addOne, + remove: likesAdapter.removeOne, + clear: likesAdapter.removeAll, + }, +}); + +/* + * Export reducer for store configuration. 
+ */ +export const likesReducer = likesSlice.reducer; + +export const likesActions = likesSlice.actions; + +const { selectAll } = likesAdapter.getSelectors(); + +const getlikesState = ( + rootState: ROOT +): LikesState => rootState[LIKES_FEATURE_KEY]; + +const selectAllLikes = createSelector(getlikesState, selectAll); + +export const likesSelectors = { + selectAllLikes, +}; +``` + +Every time the “like” button gets clicked, you want to store the content of what users liked. So you need to create an entity to store this information. + +```ts +export interface LikesEntity { + id: string; + content: string; + dateAdded: number; +} +``` + +This state has 3 actions: + +- like: when users click like +- remove: when users cancel the like +- clear: when users clear all the likes + +### Root Store + +Then you have to add the root store and create a transform function to stringify the redux state: + +```html + +``` + +### Connect Redux State with UI + +Then in `apps/cats/src/app/App.tsx`, you have to: + +- wrap the app inside the `StoreProvider` with the root store to connect with the Redux state. +- wrap the app inside `PersistGate` to persist the redux state in the storage + +```tsx +import React from 'react'; +import AsyncStorage from '@react-native-async-storage/async-storage'; +import { PersistGate } from 'redux-persist/integration/react'; +import { + createRootStore, + transformEntityStateToPersist, +} from '@nx-expo-monorepo/states/cat'; +import { Loading } from '@nx-expo-monorepo/ui'; +import { Provider as StoreProvider } from 'react-redux'; + +const App = () => { + const persistConfig = { + key: 'root', + storage: AsyncStorage, + transforms: [transformEntityStateToPersist], + }; + const { store, persistor } = createRootStore(persistConfig); + + return ( + } persistor={persistor}> + ... + + ); +}; + +export default App; +``` + +In your component where the like button is located, you need to dispatch the like action. I created a file at `apps/cats/src/app/facts/facts.props.ts`: + +```ts +import { + likesActions, + LikesEntity, + RootState, +} from '@nx-expo-monorepo/states/cat'; +import { AnyAction, ThunkDispatch } from '@reduxjs/toolkit'; + +const mapDispatchToProps = ( + dispatch: ThunkDispatch +) => { + return { + like(item: LikesEntity) { + dispatch(likesActions.like(item)); + }, + }; +}; + +type mapDispatchToPropsType = ReturnType; + +type FactsProps = mapDispatchToPropsType; + +export { mapDispatchToProps }; +export type { FactsProps }; +``` + +Now you have passed the `like` function to the props of the facts component. Now inside the facts component, you can call the like function from props to dispatch the like action. + +### Debugging + +To debug redux with Expo, I can simply open the Debugger Menu by entering “d” in the console or in the app, then choose the option “Open JS Debugger”. + +![](/blog/images/2023-11-08/bodyimg9.webp) + +Then you can view my redux logs in the JS Debugger console: + +![](/blog/images/2023-11-08/bodyimg10.webp) + +Or you can run `npx nx serve cats` to launch the app in web view. Then you can use Redux Devtools and debug the native app like a web app: + +![](/blog/images/2023-11-08/bodyimg11.webp) + +--- + +## Summary + +Here is a simple app that uses TanStack Query and Redux for state management. These 2 tools are pretty powerful and they manage both server and client state for you, which is easy to scale, test, and debug. + +Nx is a powerful monorepo tool. Together with Nx and these 2 state management tools, it will be very easy to scale up any app. 
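
One piece that is only described in prose above is the root store itself. As a reference, here is a minimal sketch of what `createRootStore` and `transformEntityStateToPersist` could look like. The names come from the imports used in `App.tsx` earlier; the implementation details (the pass-through transform and the middleware options) are assumptions rather than the exact code from the repo:

```ts
import { combineReducers, configureStore } from '@reduxjs/toolkit';
import { createTransform, persistReducer, persistStore } from 'redux-persist';
import type { PersistConfig } from 'redux-persist';
import { likesReducer, LIKES_FEATURE_KEY, LikesState } from './likes/likes.slice';

// Sketch: pass the likes entity state through unchanged; a real transform could
// serialize/deserialize the entity adapter state here.
export const transformEntityStateToPersist = createTransform<LikesState, LikesState>(
  (inboundState) => inboundState,
  (outboundState) => outboundState
);

const rootReducer = combineReducers({
  [LIKES_FEATURE_KEY]: likesReducer,
});

export type RootState = ReturnType<typeof rootReducer>;

export function createRootStore(persistConfig: PersistConfig<RootState>) {
  const store = configureStore({
    reducer: persistReducer(persistConfig, rootReducer),
    // redux-persist dispatches non-serializable actions, so relax the default check
    middleware: (getDefaultMiddleware) =>
      getDefaultMiddleware({ serializableCheck: false }),
  });
  const persistor = persistStore(store);
  return { store, persistor };
}
```
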
+ +- TanStack Query site: [https://tanstack.com/query/latest](https://tanstack.com/query/latest) +- Official @nx/expo plugin: [/nx-api/expo](/nx-api/expo) +- Official @nx/react-native plugin: [/nx-api/react-native](/nx-api/react-native) + +--- + +## Learn more + +- [Nx Docs](/getting-started/intro) +- [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) +- [Nx GitHub](https://github.com/nrwl/nx) +- [Nx Official Discord Server](https://go.nx.dev/community) +- [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) +- [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2023-11-21-ai-assistant.md b/docs/blog/2023-11-21-ai-assistant.md new file mode 100644 index 0000000000000..6d8e1ebcc550f --- /dev/null +++ b/docs/blog/2023-11-21-ai-assistant.md @@ -0,0 +1,314 @@ +--- +title: Nx Docs AI Assistant +authors: [Katerina Skroumpelou] +cover_image: '/blog/images/2023-11-21/featured_img.webp' +tags: [nx, docs, AI] +--- + +## Introduction + +The [Nx Docs AI Assistant](/ai-chat) is a tool designed to provide users with answers straight from the Nx documentation. In this article I will explain how it is built, and how we ensure accuracy and relevance. + +In the end of this document I have added a [“glossary”](#glossary) of terms that are used throughout this document. + +## Why have an AI assistant for documentation? + +First of all, let’s answer this simple question: why do you need an AI assistant for a documentation site in the first place? Using an AI assistant for documentation search and retrieval can offer a number of benefits for both users and authors. For users, the challenges of navigating through a large volume and density of documentation are alleviated. Unlike static keyword matching, AI enables more personalized and contextual search, allowing for more complex or sophisticated queries beyond simple keywords. This creates a dynamic feedback loop where users can ask follow-up questions, mix and combine documents, and ultimately enjoy an enhanced user experience that goes beyond basic documentation retrieval. + +For authors, a docs AI assistant provides valuable insights into user behavior. It can identify the questions users are frequently asking, pointing to areas where more documentation may be needed. Additionally, if the AI consistently provides unsatisfactory or incorrect responses to certain queries, it could highlight unclear or lacking portions of the documentation. This not only allows for targeted improvements but also makes more parts of the documentation easily accessible to users through intelligent linking. Overall, it can enrich user interaction and help with future content strategy. + +## The Nx Docs AI Assistant Workflow + +### Overview + +In a nutshell, the Nx Docs AI Assistant works in the following way: + +1. Split our docs into smaller chunks +2. Create an [embedding](#embeddings) for each chunk +3. Save all these embeddings in [Postgres using pgvector (Supabase!)](https://supabase.com/docs/guides/database/extensions/pgvector) +4. Get question from the user +5. Create embedding for user’s question +6. Perform a vector similarity search on your database — bring back all the chunks of your documentation that are similar to the user’s question +7. Use the [GPT chat completion](https://platform.openai.com/docs/guides/text-generation/chat-completions-api) function. Pass a prompt, the user’s question and the retrieved chunks from the docs. 
GPT will then try to extract the relevant facts from these chunks, in order to formulate a coherent answer.

This is based on the Web Q&A Tutorial from OpenAI [(https://platform.openai.com/docs/tutorials/web-qa-embeddings)](https://platform.openai.com/docs/tutorials/web-qa-embeddings) and Supabase’s Vector Search example [(https://supabase.com/docs/guides/ai/examples/nextjs-vector-search)](https://supabase.com/docs/guides/ai/examples/nextjs-vector-search).

It’s important to note here that we are not “training the model on our docs”. The model is pretrained. We are just giving the model parts of our docs which are relevant to the user’s question, and the model creates a coherent answer to the question. It’s basically like pasting a docs page into ChatGPT and asking it “how do I do that?”. Except in this case, we’re first searching our documentation and giving GPT only the relevant parts (more about how we do that later in this article), which it can “read” and extract information from.

## Step 1: Preprocessing our docs

Every few days, we run an [automated script that will generate embeddings](https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/.github/workflows/generate-embeddings.yml) (numeric/vector representations of words and phrases) for our documentation, and store these embeddings in Supabase. As mentioned above, this step has 3 parts:

### Split our docs into smaller chunks

Most of this code follows the example from [Supabase’s Clippy](https://github.com/supabase-community/nextjs-openai-doc-search). It breaks the markdown tree into chunks, keeps the heading, and also creates a checksum to keep track of changes.

Ref in the code: [https://github.com/nrwl/nx/blob/0197444df5ea906f38f06913b2bc366e04b0acc2/tools/documentation/create-embeddings/src/main.mts#L66](https://github.com/nrwl/nx/blob/0197444df5ea906f38f06913b2bc366e04b0acc2/tools/documentation/create-embeddings/src/main.mts#L66)

This part is copied from: [https://github.com/supabase-community/nextjs-openai-doc-search/blob/main/lib/generate-embeddings.ts](https://github.com/supabase-community/nextjs-openai-doc-search/blob/main/lib/generate-embeddings.ts)

```js
export function processMdxForSearch(content: string) {
  // …
  const mdTree = fromMarkdown(content, {});
  const sectionTrees = splitTreeBy(mdTree, (node) => node.type === 'heading');
  // …
  const sections = sectionTrees.map((tree: any) => {
    const [firstNode] = tree.children;
    const heading =
      firstNode.type === 'heading' ? toString(firstNode) : undefined;
    return {
      content: toMarkdown(tree),
      heading,
      slug,
    };
  });
  return {
    checksum,
    sections,
  };
}
```

### Create an embedding for each chunk

Using the `openai.embeddings.create` function with the model “text-embedding-ada-002”, we create an embedding for each chunk.

Ref in the code: [https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/tools/documentation/create-embeddings/src/main.mts#L314](https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/tools/documentation/create-embeddings/src/main.mts#L314)

```js
const embeddingResponse = await openai.embeddings.create({
  model: 'text-embedding-ada-002',
  input,
});
```

### Save all these embeddings in Postgres using pgvector, on Supabase.
+ +Store this embedding in Supabase, in a database that has already been created, following the steps mentioned here: + +[https://supabase.com/docs/guides/ai/examples/nextjs-vector-search?database-method=dashboard#prepare-the-database](https://supabase.com/docs/guides/ai/examples/nextjs-vector-search?database-method=dashboard#prepare-the-database) + +Essentially, we are setting up two PostgreSQL tables on Supabase. Then, we are inserting the embeddings into these tables. + +Ref in code: [https://github.com/nrwl/nx/blob/master/tools/documentation/create-embeddings/src/main.mts#L327](https://github.com/nrwl/nx/blob/master/tools/documentation/create-embeddings/src/main.mts#L327) + +```js +const { data: pageSection } = await supabaseClient + .from('nods_page_section') + .insert({ + page_id: page.id, + slug, + heading, + longer_heading, + content, + url_partial, + token_count, + embedding, + }); // … +``` + +## Step 2: User query analysis and search + +When a user poses a question to the assistant, we calculate the embedding for the user’s question. The way we do that is, again, using openai.embeddings.create function with the model text-embedding-ada-002. + +Ref in code: [https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/nx-dev/nx-dev/pages/api/query-ai-handler.ts#L58](https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/nx-dev/nx-dev/pages/api/query-ai-handler.ts#L58) + +```js +const embeddingResponse: OpenAI.Embeddings.CreateEmbeddingResponse = + await openai.embeddings.create({ + model: 'text-embedding-ada-002', + input: sanitizedQuery + getLastAssistantMessageContent(messages), + }); +``` + +The assistant compares the query embedding with these documentation embeddings to identify relevant sections. This comparison is essentially measuring how close the query’s vector is to the documentation vectors. The closer they are, the more related the content. The way this works is that it sends the user’s question embedding to Supabase, to a PostgreSQL function, which runs a vector comparison between the user’s question embedding and the stored embeddings in the table. The PostgreSQL function returns all the similar documentation chunks. + +The function that is used uses the dot product between vectors to calculate similarity. For normalized vectors, the dot product is equivalent to cosine similarity. Specifically, when two vectors A and B are normalized (i.e., their magnitudes are each 1), the cosine similarity between them is the same as their dot product. The OpenAI embeddings are normalized to length 1, so cosine similarity and dot product will produce the same results. + +Ref in code: [https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/nx-dev/nx-dev/pages/api/query-ai-handler.ts#L70](https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/nx-dev/nx-dev/pages/api/query-ai-handler.ts#L70) + +```js +const { data: pageSections } = await supabaseClient.rpc('match_page_sections', { + embedding, + // … +}); +``` + +## Step 3: Generating a Response + +With the relevant sections (documentation chunks) identified and retrieved, GPT (the generative AI) steps in. Using the relevant sections as context and following a systematic approach, GPT crafts a response. + +This approach the AI is instructed to use (in the **prompt**) is the following: + +- Identify CLUES from the query and documentation. +- Deduce REASONING based solely on the provided Nx Documentation. 
+- EVALUATE its reasoning, ensuring alignment with Nx Documentation. +- Rely on previous messages for contextual continuity. + +### Ensuring Quality + +If there’s no matching section in the documentation for a query, the script throws a “no_results” error. So, after the initial search in the docs (PostgreSQL function), if the search returns no results (no vectors found that are similar enough to the user’s question vector), the process stops, and our Assistant replies that it does not know the answer. + +### The use of useChat function + +It’s necessary here to clarify that we use the useChat [(https://sdk.vercel.ai/docs/api-reference/use-chat)](https://sdk.vercel.ai/docs/api-reference/use-chat) function of the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction). This function, as mentioned in the docs, does the following: + +> It enables the streaming of chat messages from your AI provider, manages the state for chat input, and updates the UI automatically as new messages are received. + +It essentially takes care of the following things: + +1. You don’t have to worry about manually creating a “messages” array to store your “conversation” (messages you exchange) with the GPT endpoint +2. You don’t have to manually implement the streaming functionality in your UI + +Then, in your React component, you can call this function directly, and get the messages object from it, to render your messages in your UI. It exposes input, handleInputChange and handleSubmit which you can use in your React form, and it will take care of all the rest. You can pass an api string to it, to tell it which endpoint to use as the chat provider. + +### Creating the query + +If you look at our [query-ai-handler.ts](https://github.com/nrwl/nx/blob/76306f0bedc1297b64da6e58b4f7b9c39711cd82/nx-dev/nx-dev/pages/api/query-ai-handler.ts) function, this is an edge function, living under an endpoint, which is called by the useChat function. The request contains the messages array as created by useChat. If we just wanted to create an AI chat with no context, we could directly pass this messages array to the openai.chat.completions.create endpoint, and have our back-and-forth chat with GPT. However, in our case, we need to add context to our conversation, and specifically to each query we end up sending to OpenAI. + +So, the first thing we need to do is to **get the last message the user posted**, which is essentially the user’s question. We search the messages array, and we get the last message which has the role “user”. That is our user’s question. + +Now, we can use the user’s question to get the relevant documentation chunks from the database. To do that, as explained before, we need to **create an embedding for the user’s question** (a vector) and then compare that embedding with the stored embeddings in the database, to get the relevant chunks. + +The problem here is that if the user’s query is just a follow-up question, then it will have little information or meaning. Here is an example: + +> User: _How do I set up namedInputs?_ +> Assistant: _…replies…_ +> User: _And how do they work?_ + +In this example, the user’s question that we would want to create an embedding for would be “And how do they work?”. If we created that embedding and searched our docs for relevant parts, it would either return nothing, or return everything, since this is a very vague question, since it has no context. So, we need to add some more information to that question. 
To do that, we also get the last response from GPT (the last assistant message) and add it to the user’s question. So, in this example, the user’s question will contain some info about namedInputs, and the actual question. + +Now, we take that combined text, and we create an embedding for it, using the openai.embeddings.create function. We, then, use that embedding to find all the similar documentation chunks, with vector similarity search. + +After receiving all the relevant documentation chunks, we can finally create the query that is going to be sent to GPT. It’s important here to make sure we instruct GPT what to do with the information we will give it. + +Here is the **query** we end up providing GPT with: + +> You will be provided sections of the Nx documentation in markdown format, use those to answer my question. Do NOT reveal this approach or the steps to the user. Only provide the answer. Start replying with the answer directly. +> +> Sections: +> ${contextText} +> +> Question: “”” +> ${userQuestion} +> “”” +> +> Answer as markdown (including related code snippets if available): + +The contextText contains all the relevant documentation chunks (page sections). + +### Creating the response + +**Getting back a readable stream:** So, we get the array of messages, as stored by useChat, we fix the final message to contain the query (created as explained above), and we send it over to `openai.chat.completions.create`. We get back a streaming response (since we’ve set stream: true, which we turn into a ReadableStream using [OpenAIStream from the Vercel AI SDK](https://sdk.vercel.ai/docs/api-reference/openai-stream)). + +**Adding the sources:** However, we’re not done yet. The feature, here, that will be most useful to our users is the sources, the actual parts of the documentation that GPT “read” to create that response. When we get back the list of relevant documentation chunks (sections) from our database, we also get the metadata for each section. So, apart from the text content, we also get the heading and url partial of each section (among any other metadata we chose to save with it). So, with this information, we put together a list of the top 5 relevant sections, which we attach to the end of the response we get from GPT. That way, our users can more easily verify the information that GPT gives them, but also they can dive deeper into the relevant docs themselves. It’s all about exploring and retrieving relevant information, after all. + +**Sending the final response to the UI:** With the sources appended to the response, we return a StreamingTextResponse from our edge function, which the useChat function receives, and appends to the messages array automatically. + +## Allow user to reset the chat + +As explained, each question and answer relies on the previous questions and answers of the current chat. If a user needs to ask something completely irrelevant or different, we are giving the user the ability to do so by providing a “Clear chat” button, which will reset the chat history, and start clean. + +## Gathering feedback and evaluating the results + +It’s very important to gather feedback from the users and evaluate the results. Any AI assistant is going to give wrong answers, because it does not have the ability to critically evaluate the responses it creates. It relies on things it has read, but not in the way a human relies on them. It generates the next most probable word (see glossary for generative AI below). 
For that reason, it’s important to do the following things: + +1. Inform users that they should always double-check the answers and do not rely 100% on the AI responses +2. Provide users with feedback buttons and/or a feedback form, where they can evaluate whether a response was good or bad. At Nx we do that, and we also associate each button click with the question the user asked, which will give us an idea around which questions the AI gets right or wrong. +3. Have a list of questions that you ask the AI assistant, and evaluate its responses internally. Use these questions as a standard for any changes made in the assistant. + +## Wrapping up + +In this guide, we’ve explored the intricacies of the Nx Docs AI Assistant, an innovative tool that enhances the experience of both users and authors of Nx documentation. From understanding the need for an AI assistant in navigating complex documentation to the detailed workflow of the Nx Docs AI Assistant, we have covered the journey from preprocessing documentation to generating coherent and context-aware responses. + +Let’s see at some key takeaways: + +**Enhanced User Experience:** The AI assistant significantly improves user interaction with documentation by offering personalized, context-aware responses to queries. This not only makes information retrieval more efficient but also elevates the overall user experience. + +**Insights for Authors:** By analyzing frequently asked questions and areas where the AI struggles, authors can pinpoint documentation gaps and areas for improvement, ensuring that the Nx documentation is as clear and comprehensive as possible. + +**OpenAI API utilization:** The use of embeddings, vector similarity search, and GPT’s generative AI capabilities demonstrate a sophisticated approach to AI-driven documentation assistance. This blend of technologies ensures that users receive accurate and relevant responses. + +**Continuous Learning and Improvement:** The system’s design includes mechanisms for gathering user feedback and evaluating AI responses, which are crucial for ongoing refinement and enhancement of the assistant’s capabilities. + +**Transparency and User Trust:** By openly communicating the limitations of the AI and encouraging users to verify the information, the system fosters trust and promotes responsible use of AI technology. + +**Accessibility and Efficiency:** The AI assistant makes Nx documentation more accessible and navigable, especially for complex or nuanced queries, thereby saving time and enhancing productivity and developer experience. + +## Future steps + +OpenAI released the Assistants API, which takes the burden of chunking the docs, creating embeddings, storing the docs in a vector database, and querying that database off the shoulders of the developers. This new API offers all these features out of the box, removing the need to create a customized solution, as the one explained above. It’s still in beta, and it remains to be seen how it’s going to evolve, and if it’s going to overcome some burdens it poses at the moment. You can [read more about the new Assistants API in this blog post](https://pakotinia.medium.com/openais-assistants-api-a-hands-on-demo-110a861cf2d0), which contains a detailed demo on how to use it for documentation q&a. + +## Glossary + +### Core concepts + +I find it useful to start by explaining what some terms — which are going to be used quite a lot throughout this blog post — mean. 
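
Before the definitions, here is a tiny, self-contained illustration of the vector-similarity idea from Step 2. It is a toy sketch only; in the real assistant the comparison happens inside a PostgreSQL function on Supabase via pgvector, not in application code:

```ts
// Embeddings are just arrays of numbers. Two pieces of text are "similar"
// when their vectors point in nearly the same direction.
function dotProduct(a: number[], b: number[]): number {
  return a.reduce((sum, value, i) => sum + value * b[i], 0);
}

// Hypothetical 3-dimensional vectors for illustration; real embeddings from
// text-embedding-ada-002 have 1536 dimensions and are normalized to length 1,
// which is why the dot product equals cosine similarity here.
const questionEmbedding = [0.12, 0.87, 0.48];
const docSectionEmbedding = [0.1, 0.9, 0.42];

const similarity = dotProduct(questionEmbedding, docSectionEmbedding);
console.log(similarity); // a higher score means more closely related content
```
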
+ +### Embeddings + +#### What they are + +In the context of machine learning, embeddings are a type of representation for text data. Instead of treating words as mere strings of characters, embeddings transform them into **vectors** (lists of numbers) in a way that captures their meanings. In embeddings, vectors are like digital fingerprints for words or phrases, converting their essence into a series of numbers that can be easily analyzed and compared. + +#### Why they matter + +With embeddings, words or phrases with similar meanings end up having **vectors** that are close to each other, making it easier to compare and identify related content. + +### Generative AI + +#### What it is + +Generative AI, the technology driving the Nx Docs AI Assistant, is a subset of AI that’s trained, not just to classify input data, but to generate new content. + +#### How it works + +Generative AI operates like a sophisticated software compiler. Just as a compiler takes in high-level code and translates it into machine instructions, generative AI takes in textual prompts and processes them through layers of neural network operations, resulting in detailed and coherent text outputs. It’s like providing a programmer with a high-level task description, and they write the necessary code to achieve it, except here the ‘programmer’ is the AI, and the ‘code’ is the generated text response. + +#### What Does “Generation” Mean in AI Context? + +In AI, especially with natural language processing models, “generation” refers to the process of producing sequences of data, in our case, text. It’s about creating content that wasn’t explicitly in the training data but follows the same patterns and structures. + +#### How Does GPT Predict the Next Word? + +For our Nx Docs AI assistant we use GPT. GPT, which stands for “Generative Pre-trained Transformer”, works using a predictive mechanism. At its core, it’s trained to predict the next word in a sentence. When you provide GPT with a prompt, it uses that as a starting point and keeps predicting the next word until it completes the response or reaches a set limit. + +It’s like reading a sentence and trying to guess the next word based on what you’ve read so far. GPT does this but by using a massive amount of textual data it has seen during training, enabling it to make highly informed predictions. + +### Context and Prompting — their role in AI models + +#### Context + +In the context of AI, “context” refers to the surrounding information, data, or conditions that provide a framework or background for understanding and interpreting a specific input, ensuring that the AI’s responses or actions are relevant, coherent, and meaningful in a given situation + +#### Prompts + +The prompt acts as an initial “seed” that guides the AI’s output. While the AI is trained on vast amounts of text, it relies on the prompt for context. For example, a prompt like “tell me about cats” might result in a broad answer, but “summarize the history of domesticated cats” narrows the model’s focus. + +By refining prompts, users can better direct the AI’s response, ensuring the output matches their intent. In essence, the prompt is a tool to direct the AI’s vast capabilities to a desired outcome. + +### The GPT Chat Completion Roles + +### System + +The “System” role typically sets the “persona” or the “character” of the AI. It gives high-level instructions on how the model should behave during the conversation. 
We start the instructions with “You are a knowledgeable Nx representative.” We also instruct the model about the format of its answer: “Your answer should be in the form of a Markdown article”. You can read the full instructions on GitHub. + +#### User + +The “User” role is straightforward. This is the input from the end-user, which the AI responds to. The user’s query becomes the User role message. This role guides what the AI should be talking about in its response. It’s a direct prompt to the AI to generate a specific answer. In our case, we take the user’s query, and we add it in a longer prompt, which specific steps the model must follow (as explained above). That way, the model focuses on the specific steps we’ve laid out, making it the immediate context for generating the answer. This is one more step towards more accurate answers based on our documentation only. Inside the prompt, which has the instructions, and the user’s query, we always add the context text as well, which are the relevant parts that are retrieved from the Nx Documentation. + +#### Assistant + +This role, in the context of OpenAI’s chat models, is the response of the AI. Previous Assistant responses can be included in the chat history to provide context, especially if a conversation has back-and-forth elements. This helps the model generate coherent and contextually relevant responses in a multi-turn conversation. + +--- + +## Learn more + +- [Nx Docs](/getting-started/intro) +- [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) +- [Nx GitHub](https://github.com/nrwl/nx) +- [Nx Official Discord Server](/community) +- [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) +- [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2023-11-22-unit-testing-expo.md b/docs/blog/2023-11-22-unit-testing-expo.md index 92cb27475a916..6fef3b9a4a921 100644 --- a/docs/blog/2023-11-22-unit-testing-expo.md +++ b/docs/blog/2023-11-22-unit-testing-expo.md @@ -330,6 +330,6 @@ With Nx, you do not need to explicitly install any testing library, so you can d - [Add Cypress, Playwright, and Storybook to Nx Expo Apps](https://medium.com/@emilyxiong/add-cypress-playwright-and-storybook-to-nx-expo-apps-1d3e409ce834) - 🧠 [Nx Docs](/getting-started/intro) - 👩‍💻 [Nx GitHub](https://github.com/nrwl/nx) -- 💬 [Nx Community Discord](/community) +- 💬 [Nx Community Discord](https://go.nx.dev/community) - 📹 [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - 🚀 [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2023-12-20-nx-17-2-release.md b/docs/blog/2023-12-20-nx-17-2-release.md index 522f460808c9f..bc7faab6cd325 100644 --- a/docs/blog/2023-12-20-nx-17-2-release.md +++ b/docs/blog/2023-12-20-nx-17-2-release.md @@ -209,6 +209,6 @@ That’s all for now folks! 
We’re just starting up a new iteration of developm - [Nx Docs](/getting-started/intro) - [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2024-02-05-nx-18-project-crystal.md b/docs/blog/2024-02-05-nx-18-project-crystal.md index 14126cc6f3ef0..2d3523a1848a5 100644 --- a/docs/blog/2024-02-05-nx-18-project-crystal.md +++ b/docs/blog/2024-02-05-nx-18-project-crystal.md @@ -182,6 +182,6 @@ We just released Project Crystal, so this is just the beginning of it. While we - [Nx Docs](/getting-started/intro) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2024-02-06-nuxt-js-support-in-nx.md b/docs/blog/2024-02-06-nuxt-js-support-in-nx.md index 6a06f87ef3c36..f480526d7285a 100644 --- a/docs/blog/2024-02-06-nuxt-js-support-in-nx.md +++ b/docs/blog/2024-02-06-nuxt-js-support-in-nx.md @@ -184,6 +184,6 @@ Whether you're starting a new Nuxt project or looking to enhance an existing one - [Nx Docs](/getting-started/intro) - [X / Twitter](https://twitter.com/nxdevtools) - [LinkedIn](https://www.linkedin.com/company/nrwl) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Community Discord](/community) +- [Nx Community Discord](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app) diff --git a/docs/blog/2024-02-07-fast-effortless-ci.md b/docs/blog/2024-02-07-fast-effortless-ci.md index 2bdb0683b3319..0c988c5180a58 100644 --- a/docs/blog/2024-02-07-fast-effortless-ci.md +++ b/docs/blog/2024-02-07-fast-effortless-ci.md @@ -121,6 +121,6 @@ If you have a task that can’t be run on Nx Agents for some reason, you can eas - [Nx Docs](/getting-started/intro) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2024-02-15-launch-week-recap.md b/docs/blog/2024-02-15-launch-week-recap.md index 7ac61a948efa8..5eab117a7956e 100644 --- a/docs/blog/2024-02-15-launch-week-recap.md +++ b/docs/blog/2024-02-15-launch-week-recap.md @@ -142,6 +142,6 @@ That’s all for now folks! 
We’re just starting up a new iteration of developm - [Nx Docs](/getting-started/intro) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2024-03-20-why-speed-matters.md b/docs/blog/2024-03-20-why-speed-matters.md index d2cf3266749fb..d44a24c96ac97 100644 --- a/docs/blog/2024-03-20-why-speed-matters.md +++ b/docs/blog/2024-03-20-why-speed-matters.md @@ -99,6 +99,6 @@ Nx provides an unparalleled toolkit for developers and teams looking to optimize - [X / Twitter](https://twitter.com/nxdevtools) - [LinkedIn](https://www.linkedin.com/company/nrwl) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Community Discord](/community) +- [Nx Community Discord](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app) diff --git a/docs/blog/2024-03-21-reliable-ci.md b/docs/blog/2024-03-21-reliable-ci.md index d129f4de3982a..2144fe82e8155 100644 --- a/docs/blog/2024-03-21-reliable-ci.md +++ b/docs/blog/2024-03-21-reliable-ci.md @@ -174,3 +174,15 @@ You can learn more about Nx Cloud on [nx.app](https://nx.app) and Nx open source **Nx Cloud Pro includes a 2-month free trial** that is definitely worth trying out if you're curious what Cloud Pro can do for your CI. You can try out Nx Agents, e2e test splitting, deflaking and more. [Learn more about Nx Cloud Pro.](https://nx.app/campaigns/pro) We also have a **Pro for Startups** plan which offers agents that are 3.5x cheaper than analogous VMs on CircleCI or Github Actions. [Learn more about Nx Pro for Startups.](https://nx.app/campaigns/pro-for-startups) + +--- + +## Learn more + +- [Nx Docs](/getting-started/intro) +- [X / Twitter](https://twitter.com/nxdevtools) +- [LinkedIn](https://www.linkedin.com/company/nrwl) +- [Nx GitHub](https://github.com/nrwl/nx) +- [Nx Community Discord](https://go.nx.dev/community) +- [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) +- [Speed up your CI](https://nx.app) diff --git a/docs/blog/2024-04-19-manage-your-gradle.md b/docs/blog/2024-04-19-manage-your-gradle.md index 860bc8288320d..d811a5c4b856e 100644 --- a/docs/blog/2024-04-19-manage-your-gradle.md +++ b/docs/blog/2024-04-19-manage-your-gradle.md @@ -212,6 +212,6 @@ Here is how to set up Nx with the Gradle workspace. 
Hopefully, this gives you a - [Nx Docs](/getting-started/intro) - [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/2024-05-08-nx-19-release.md b/docs/blog/2024-05-08-nx-19-release.md index 7d71add6f7410..a6a40fb622ce6 100644 --- a/docs/blog/2024-05-08-nx-19-release.md +++ b/docs/blog/2024-05-08-nx-19-release.md @@ -228,6 +228,6 @@ Zack - [Nx Docs](/getting-started/intro) - [X/Twitter](https://twitter.com/nxdevtools) -- [LinkedIn](https://www.linkedin.com/company/nrwl/) - [Nx GitHub](https://github.com/nrwl/nx) -- [Nx Official Discord Server](/community) +- [Nx Official Discord Server](https://go.nx.dev/community) - [Nx Youtube Channel](https://www.youtube.com/@nxdevtools) - [Speed up your CI](https://nx.app/) diff --git a/docs/blog/authors.json b/docs/blog/authors.json index cc7f6c5317093..c19b4829ac630 100644 --- a/docs/blog/authors.json +++ b/docs/blog/authors.json @@ -46,5 +46,11 @@ "image": "/blog/images/Zack DeRose.jpeg", "twitter": "zackderose", "github": "ZackDeRose" + }, + { + "name": "Jeff Cross", + "image": "/blog/images/Jeff Cross.jpeg", + "twitter": "jeffbcross", + "github": "jeffbcross" } ] diff --git a/docs/blog/images/2023-09-25/featured_img.webp b/docs/blog/images/2023-09-25/featured_img.webp new file mode 100644 index 0000000000000..08bce5a79e31d Binary files /dev/null and b/docs/blog/images/2023-09-25/featured_img.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg1.webp b/docs/blog/images/2023-11-08/bodyimg1.webp new file mode 100644 index 0000000000000..a0f68ed248f1e Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg1.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg10.webp b/docs/blog/images/2023-11-08/bodyimg10.webp new file mode 100644 index 0000000000000..92d2989352631 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg10.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg11.webp b/docs/blog/images/2023-11-08/bodyimg11.webp new file mode 100644 index 0000000000000..6a8d5884065b1 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg11.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg2.webp b/docs/blog/images/2023-11-08/bodyimg2.webp new file mode 100644 index 0000000000000..4fcaedfc973f1 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg2.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg3.webp b/docs/blog/images/2023-11-08/bodyimg3.webp new file mode 100644 index 0000000000000..8a7a81f489dc3 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg3.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg4.webp b/docs/blog/images/2023-11-08/bodyimg4.webp new file mode 100644 index 0000000000000..2d47f52ff3722 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg4.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg5.webp b/docs/blog/images/2023-11-08/bodyimg5.webp new file mode 100644 index 0000000000000..8680579edcf15 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg5.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg6.webp b/docs/blog/images/2023-11-08/bodyimg6.webp new file mode 100644 index 0000000000000..5d36cfc6554de Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg6.webp differ 
diff --git a/docs/blog/images/2023-11-08/bodyimg7.webp b/docs/blog/images/2023-11-08/bodyimg7.webp new file mode 100644 index 0000000000000..dfbcd6670f811 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg7.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg8.webp b/docs/blog/images/2023-11-08/bodyimg8.webp new file mode 100644 index 0000000000000..68fa80e7631b4 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg8.webp differ diff --git a/docs/blog/images/2023-11-08/bodyimg9.webp b/docs/blog/images/2023-11-08/bodyimg9.webp new file mode 100644 index 0000000000000..376c029513c19 Binary files /dev/null and b/docs/blog/images/2023-11-08/bodyimg9.webp differ diff --git a/docs/blog/images/2023-11-08/featured_img.webp b/docs/blog/images/2023-11-08/featured_img.webp new file mode 100644 index 0000000000000..861753a65e4be Binary files /dev/null and b/docs/blog/images/2023-11-08/featured_img.webp differ diff --git a/docs/blog/images/2023-11-21/featured_img.webp b/docs/blog/images/2023-11-21/featured_img.webp new file mode 100644 index 0000000000000..59a765c4cc04e Binary files /dev/null and b/docs/blog/images/2023-11-21/featured_img.webp differ diff --git a/docs/blog/images/authors/Jeff Cross.jpeg b/docs/blog/images/authors/Jeff Cross.jpeg new file mode 100644 index 0000000000000..9a66a91bfe7ab Binary files /dev/null and b/docs/blog/images/authors/Jeff Cross.jpeg differ diff --git a/nx-dev/ui-blog/src/lib/author-detail.tsx b/nx-dev/ui-blog/src/lib/author-detail.tsx index 249407aec13b2..bf499d60bdb57 100644 --- a/nx-dev/ui-blog/src/lib/author-detail.tsx +++ b/nx-dev/ui-blog/src/lib/author-detail.tsx @@ -24,12 +24,14 @@ export default function AuthorDetail({ author }: AuthorDetailProps) { {author.name}