Initial user research #256

Closed
8 tasks done
Tracked by #243
RobTobias123 opened this issue Nov 9, 2023 · 45 comments
Labels
Status: Completed Nothing further to be done with this issue. Awaiting to be closed Type: Documentation Improvements or additions to documentation

Comments

@RobTobias123
Contributor

RobTobias123 commented Nov 9, 2023

Outcome

Gain a deeper understanding of how people currently use our documentation, and get insights into how to create an improved experience for them.

Scope

  • Identify potential participants and personas (user types)
  • Set up 1:1 sessions with those users who may be interested in taking part (designers/devs/other potential participants)
  • Devise interview questions (UserZoom)
  • Put results into an empathy map (says/does/thinks/feels) with them 1:1, perhaps on Miro or post-its, based on the current docs and the interviewee's current workflow
  • Converge with the Nucleus team to cluster and synthesise the results into one map, and identify the gaps (opportunities) and patterns
  • Provide and record recommendations (report PPT/Reach, blog...) for improvements to the current experience so we can shape the new docs accordingly
  • Extra to this (28 Nov): conduct a test to observe whether the new Astro docs are an improvement over the existing docs for the same tasks
  • Tidy and rationalise the content, removing any older 'template' content, for clear presentation
@RobTobias123 RobTobias123 mentioned this issue Nov 9, 2023
26 tasks
@RobTobias123 RobTobias123 self-assigned this Nov 9, 2023
@RobTobias123 RobTobias123 added the Type: Documentation Improvements or additions to documentation label Nov 9, 2023
@RobTobias123 RobTobias123 added this to the Internal tools milestone Nov 9, 2023
@RobTobias123
Contributor Author

RobTobias123 commented Nov 13, 2023

Potential questions for 1:1 user interviews relating to current docs:

What is your current workflow and when do you use Nucleus docs?

  • when designing a journey
  • prototyping
  • dev…

Where would you find a snippet of code?

How do you know if you’re using the right component for the purpose you intend?

How would you find out about the different configurations available in the component?

What do you find difficult about the docs?

How do you think we could improve it for you and your ways of working?

What currently works well for you in the docs?

…(what else?)

———

Says example:
Where do I start?
I want to copy the snippet to paste into Playground.

Does example:
Checks the docs.
Opens up Storybook.

Thinks:
Am I using this the correct way?
Why are the docs so long to read?
I like the look of this so I’ll use it.

Feels:
There are too many different places to refer to before using a component.
Slows the process down
Stifles my creativity

@RobTobias123
Contributor Author

I have created a set of survey questions in UserZoom for the user interviews. There are a few different user types, from designers to devs to QA etc. The first two questions are screeners; the rest are relevant to understanding usage and behaviour of the current docs.

Please take a look at the test questionnaire and let me know if you want to edit, add, or delete anything before I invite the (internal) users to answer...

https://preview.userzoom.com/mpaap/MTAgQzE0MTZTMTEzNyAg/1

@RobTobias123
Contributor Author

Identified some potential participants across different user types:

Designers x 7
Devs x 4
CMS Managers x 3
QA - Alex B is coming back to me with names.

(It might be worth ensuring equal numbers from each pot to avoid bias.)

@RobTobias123 RobTobias123 added Type: Bug Something isn't working question Further information is requested and removed Type: Bug Something isn't working labels Nov 14, 2023
@andij
Contributor

andij commented Nov 17, 2023

I'd like to suggest some changes to the first page of the survey if I may:

Here is the current

Hello 👋

We would like you to have a look at how users of the Nucleus Design System can find information regarding the documentation site (often referred to as Nucleus docs).

We intend to move the docs to a better platform and with your participation, we can uncover any problems with the current design and implementation, and discover areas for improvement.

There are just 10 questions in this survey. Please answer them based on your experience and workflow in mind. This is an internal survey and the results will not be shared outside the business. They are purely for us to help ascertain what we might improve for you, the user.

You might find it useful to open the current docs as a visual reminder whilst answering the questions:
https://nucleus.design/docs/getting-started/introduction

Keep in mind – we are not testing your knowledge or judging your process, we just want to understand how you currently use things and any pain points you might have.

Here is my suggestion

Hello 👋

We are on a mission to better understand how you, a user of the Nucleus Design System, find relevant information, and how the Nucleus Docs fit into your workflow.

We are in the process of moving the Nucleus Docs to a new platform, and with the participation of our users we will together uncover problems and pain-points with the current implementation, with a view to improving it for everyone.

There are 10 questions in this survey. Please answer them based on your experience and workflow.

This is an internal survey, the results will not be shared outside the business. They are purely for us to help ascertain what we can improve for you, and other users like you.

Please open the Nucleus Design homepage in a separate browser window as it'll help you to answer the questions: https://nucleus.design/

Keep in mind – we are not testing your knowledge or judging your process, we are focussed on understanding how you use the Nucleus Docs to help us identify any pain points and positive experiences you have.

@andij
Contributor

andij commented Nov 17, 2023

I'm interested in the types of people that use Nucleus. Can we include a question which could help us identify their personality and behaviour? For example:

People are diverse, do any of the following resonate with you?

  • Hands-On Person

    • Enjoys learning through direct experience and practical, hands-on activities.
    • Prefers trial-and-error methods and learns best by doing.
  • Visual Learner

    • Thrives on visual aids such as charts, graphs, and diagrams.
    • Finds it easier to understand information when it's presented visually.
  • Auditory Learner

    • Prefers verbal explanations and benefits from listening to information.
    • Enjoys discussions, lectures, and spoken instructions.
  • Critical Thinker

    • Analyses information carefully and questions assumptions.
    • Prefers to understand the underlying principles and reasons behind concepts.
  • Intuitive Thinker

    • Relies on intuition and gut feelings when making decisions.
    • Prefers to grasp the essence of a concept rather than focusing on detailed analysis.
  • Collaborative Worker

    • Enjoys working in a team and values collective input.
    • Finds synergy and creativity through collaboration.
  • Independent Worker

    • Prefers working alone and is self-motivated.
    • Enjoys autonomy and may be more productive when working independently.
  • Goal-Oriented Person

    • Focuses on outcomes and results.
    • Enjoys setting clear goals and working systematically toward achieving them.
  • I'd rather not say

@RobTobias123
Contributor Author

RobTobias123 commented Nov 17, 2023

Empathy map template for collecting info from individual participants. We'll synthesise the data collected from all participants into one map to gain insights and identify patterns, pain-points etc. (I used this approach instead of Miro as it's perhaps a little more private: we can choose who to share the file with, and it can be anonymised after collection if necessary.)

https://www.figma.com/file/5o1qKSZcKyk1xDAxlsPItx/Empathy-Map-template?type=design&node-id=0%3A1&mode=design&t=iTXyp1rqwgOanfiE-1

@RobTobias123
Contributor Author

RobTobias123 commented Nov 17, 2023

Thanks for the additions earlier @andij - I have updated the survey to include these. They're possibly a little lengthy for the formatting available; maybe we could provide the examples verbally as participants fill out the form, so it's visually simpler? It could be fine as it is, though.

@RobTobias123
Contributor Author

I've added 3 basic tasks on the empathy map for interviewees to follow to add some context and observe them using the docs. I've also added an example to each quadrant to help stimulate responses.

@RobTobias123
Contributor Author

The UserZoom survey is now live, and invites have initially been sent to 10 users/participants spread over Design, Engineering, and CMS (more can follow once we have names for QA).

Hi XXX, We are on a mission to better understand how you, a user of the Nucleus Design System, find relevant information, and how the Nucleus Docs fit into your workflow.

I would like to invite you to a user interview where there is a short User Zoom survey to answer at https://s.userzoom.com/m/MSBDMTQxNlMxMTM3 and additionally, a few basic tasks we’d also like to observe separately in person and take some notes on (empathy mapping).

Please let me know if you would be interested (or not) in taking part and I will add some time to our diaries. It should only take a few minutes of your time.

I look forward to hearing from you.

Thanks.

@RobTobias123
Contributor Author

So far we have had some good responses and there are 1:1 sessions booked with participants for tomorrow afternoon and Friday. We have had one person decline to take part. I have invited a few more to complete just the survey part.

@RobTobias123 RobTobias123 removed the question Further information is requested label Nov 20, 2023
@RobTobias123
Contributor Author

I have set up an empathy map template now on Miro for sharing with the users scheduled in for tomorrow. These basic tasks should let us observe how users actually interact with the Nucleus Docs - letting them talk out loud and then capturing what they say/do/think/feel and any particular pain-points or positives. Synthesising the results we collect from this exercise and the survey should give us some insights into how we can improve the user experience and apply them to the new docs.

https://miro.com/app/board/uXjVNMuUH1Q=/?share_link_id=411988086464

@RobTobias123
Contributor Author

Following the dry run with Drew and Mekala, I've added one more task to see if the user can choose a variant of a component (without leading them) and refined the script I am going to use for this afternoon's sessions:

[Screenshot attached]

@RobTobias123
Contributor Author

3 moderated UXR sessions have been conducted today. 2 more are scheduled for Friday. Any further sessions will most likely be early next week. We're aiming for between 5 and 10 participants across different segments.

@RobTobias123
Contributor Author

8 participants have responded to the survey so far. 2 names for the QA segment have been put forward and have been invited too. (This represents a wider selection of personas.)

@RobTobias123
Contributor Author

2 more moderated sessions have been conducted today.

@RobTobias123
Contributor Author

RobTobias123 commented Nov 27, 2023

Collected all the data as of today, 27 November, and shared the PPT report and insights with the Nucleus team in the Nucleus Core Teams channel.

https://centricaplc.sharepoint.com/:p:/s/NucleusDesignSystem-Core/ESESZU8_3aNDlfDuNQ0bJucBP6EmY8DOgMHET3bFwcQcGA?e=E84pcg

@RobTobias123
Contributor Author

RobTobias123 commented Nov 28, 2023

In addition to this research (following discussion with Mekala and Drew), we intend to run an A/B-type test of the agreed 'Windsor' structure for Astro vs. the current docs structure (same tasks), to ascertain whether it is an improvement on the existing structure and whether it needs adjustment. I will look into defining and setting up the test today.
(see #245 )

@RobTobias123
Contributor Author

RobTobias123 commented Nov 28, 2023

Thinking further about the additional test, maybe a tree test or card sort would be a better, more measurable method than the A/B test for assessing findability and navigation hierarchy...

@RobTobias123
Contributor Author

RobTobias123 commented Nov 29, 2023

Having discussed it, we will proceed with an unmoderated Advanced test, setting several common tasks for the user to perform on both the existing docs and the new Astro docs, to observe whether we have made improvements. The tasks may be randomised, and success criteria will be set and automated. The sessions will be recorded (both screen and microphone) in order to capture think-out-loud comments. I am currently working with Mekala and Drew to write a set of tasks and will be adding these to the study builder in UserZoom. Following that, we will run a pilot test before inviting participants.

...

As a reminder, here are the component pages that I'm working on which can be used in our test:
ns-accordion - standard
ns-alert - simple but diverse
ns-card - staple
ns-inputter - complex
ns-landmark - image heavy
ns-tab - simple
ns-timeline - diverse
If there are any other pages or areas that we'd like to test, let's add them - for example, the Page types, because they may be where someone goes to complete their task.

@RobTobias123
Contributor Author

RobTobias123 commented Nov 29, 2023

Compiled the UserZoom test as a POC and refined it with Mekala and Drew. The following items were addressed:

  • Omitted Success Question 2 from all tasks
  • Defined the success URL as the RHS menu link
  • Added instructions to the task bar text to click the Success button when they think they've found it (this covers users who successfully scroll or search to the correct place instead of using the RHS menu)
  • Updated task names to be suffixed with either 'Existing' or 'Astro 01' - the latter referring to the Astro test version number
  • Each iteration of the Astro docs will be numbered and have its own URL so that we can run another similar test between, say, v.01 and v.02

Noted but still to do:

  • Update intro text regarding extensions and permissions expectations
  • Advise on the use of Chrome browser if Firefox remains problematic with UserZoom
  • Set expectations of the time it will take to a) set up and b) complete the test
  • Advise that the same Task will be asked twice in the test - once for each version of the docs, so we can assess differences in behaviour
  • Check if 'Contribution' Task is valid
  • Maybe add Task on ns-inputter (complex component, slots, variants)
  • Populate the content for the component docs being tested in Astro 01
  • When inviting participants, be sure to include a message saying that they may contact us during the test (during working hours) should they have any technical difficulties with installing extensions etc.
  • Include a note that mentions to the user that each task will time-out after X minutes to set expectations

@RobTobias123
Contributor Author

The following tasks have been defined for the latest test:

  1. You want to add the ns-alert component to a prototype. Find the HTML markup for ns-alert that you would copy into a Nucleus Playground page.
  2. How might you identify which components ns-landmark can be placed into?
  3. Can you find the list of coloured backgrounds available in ns-landmark and identify the name of the attribute?
  4. You want to use the ns-accordion for a set of FAQs. Where might you find out if it’s okay to use it for this purpose?
  5. You have an idea for a new component that will add value and could be reused by other teams. Find documentation that explains how you might contribute this to the Nucleus Design System.

Given that each task will be performed twice (once on the existing docs, once on Astro 01), that equates to 10 tasks, each with a follow-up success NPS question on how easy the task was to complete. That's 20 items for the user to go through, so the test should not grow any larger.

Scoring the success NPS will help us compare how much more easily users completed the same task on a different platform. (The hypothesis is that Astro 01 is an improvement.)

Non-success questions for error, abandon, and time-out have also been included, so we can capture the reasons if those situations occur.

@andij
Contributor

andij commented Nov 30, 2023

I'd like to include a note that mentions to the user that each task will time-out after X minutes, so they are not surprised if/when it does time-out. And it may help provide confidence in the expected duration of the test.

@RobTobias123
Contributor Author

I've updated the study intro text; it now reads:

"We’re improving the Nucleus Design System and would like your input.

We’re moving Nucleus Docs to a new platform and need your help identifying issues. During the test, we’ll record your browser tasks and audio, asking you to use both the current and new Docs versions. The test takes about 25-30 minutes, with each task taking approximately 2 minutes.

Please note that all tasks in the browser will be recorded during the test. This also includes recording sound via your microphone as we encourage you to ’think out loud’ as you perform the tasks. You may be prompted to install the UserZoom browser extension to enable this.

Your feedback is valuable for enhancing the user experience. Thanks for participating!"

@RobTobias123
Contributor Author

RobTobias123 commented Dec 4, 2023

I've included a message on each task setting expectations as to when it times out.

@RobTobias123
Contributor Author

All success-type follow-up questions have now been converted to a 5-point rating scale from difficult to easy.

@RobTobias123
Contributor Author

Following catch-up this afternoon, all intercept delays (of success message at validation URLs) have been set to instant.

@RobTobias123
Contributor Author

Updated the follow-up timeout questions to "*1. Why do you feel there may not have been enough time to complete the task?" for all tasks.

@RobTobias123
Contributor Author

All task wording and formats updated and consistent. Validation URLs checked and updated.

@RobTobias123
Contributor Author

RobTobias123 commented Dec 4, 2023

Test tasks now read as follows:

*1. Please indicate which option best describes your role.

Product / UX design
Engineering / developer
QA
CMS Management
Other (please specify)

2-3. You want to add the ns-alert component to a prototype. Demonstrate how you would find the HTML markup. (Snippet - see the sketch after this task list.)

4-5. Find the placeholder image for the ns-card component. (image link)

6-7. Find the list of coloured backgrounds available in ns-landmark and identify the name of the attribute. (Spec - visual)

8-9. You want to use the ns-accordion as a set of FAQs (Frequently Asked Questions). Demonstrate how you might find out the maximum number of FAQs you can add. (Best practice)

10-11. Demonstrate how you would find out if you can have multiple validation messages on an ns-inputter radio button. (spec - dry)

12-13. Show us how you can edit a page (other - contribution and RHS)
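
As context for tasks 2-3, participants are looking for web-component markup roughly of the shape sketched below. This is illustrative only - the element content is an assumption on my part, not copied from the current Nucleus docs, so the real snippet may include additional attributes or slots.

```html
<!-- Illustrative sketch only: the content shown is an assumption, not copied from the Nucleus docs -->
<ns-alert>
  <p>Example alert copy that a participant might paste into a Playground page.</p>
</ns-alert>
```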

@RobTobias123
Contributor Author

Updated study following conversations. Sent pilot test link to Mekala and Drew.

@RobTobias123
Contributor Author

RobTobias123 commented Dec 5, 2023

Updated study tasks to randomise. Need to test this.

[Screenshot attached]

@RobTobias123
Contributor Author

Added a couple of validation URLs to the best-practice tasks 8 & 9, as it was discovered during testing that the same answer can be found in two locations.

@andij
Contributor

andij commented Dec 5, 2023

Suggestion for the opening page of the survey:


Thank you for helping us improve the Nucleus Docs 😀

We are moving the Nucleus Docs to a new platform and we really appreciate your help by taking this survey. 💜

  • During the test, your browser window and audio are recorded.
  • The test will take about 25-30 minutes.
  • Each task is time-limited to 2 minutes.
  • You'll be asked the same question twice in a random order, once for each version of the Nucleus Docs.
  • Please think out loud. (There is a video for you to watch shortly that will help with this.)

If you are prompted to install the UserZoom browser extension, you will need to restart the survey from the beginning, either by clicking the back button in your browser or by clicking the survey link again.

We really value your feedback.

Thank you for participating!

@RobTobias123
Contributor Author

RobTobias123 commented Dec 5, 2023

I reworked the end message to:

Thank you. You have now completed the study. Your feedback will help us enhance the user experience of the next-generation Nucleus Docs.

If you have any questions or comments regarding Nucleus Docs or the test you’ve just completed, please drop us a line on the Nucleus Teams channels.
Thanks again for participating!

@andij
Contributor

andij commented Dec 5, 2023

Draft email to Team Leaders prior to sending out the survey invitation:


Hi,

Nucleus are currently undertaking user testing on our Docs: https://nucleus.design/docs/

Following our recent moderated user testing, we are about to conduct a round of unmoderated user testing. We will soon be sending an email invitation to the survey.

Could you please forward the invite, when you receive it, on to as many people in your team as possible, encouraging them to take part - especially anyone new in your team and those who are not so familiar with Nucleus.

Everyone is invited to take part (It’d be great if you’d take part too).

Thanks very much!

@RobTobias123
Contributor Author

Updated all the success questions with a confirmation of completion: "Thank you. How easy did you find it to complete this task?" (rating). This removes the need for the intercept message, making the test more efficient to complete.

@RobTobias123
Contributor Author

Updated all non-success timeout questions to read: "Sorry, your time is up. Why do you feel there may not have been enough time to complete the task?", confirming that the task has ended because the allotted time expired. Again, this removes the need for the intercept message, making the test more efficient to complete.

@RobTobias123
Contributor Author

Reformatted task typography to emphasise the task more.

[Screenshot attached]

@RobTobias123
Contributor Author

Draft for comms:

Hi,

We’re improving the Nucleus Docs and conducting some user testing around this to help shape things – we’d really appreciate your help if you could spare some time by taking this usability test…

https://s.userzoom.com/m/MSBDMTQxNlMxMTg2

We’re looking to get a good cross-section of users from different areas of the business. Please drop us a line on the Nucleus General Teams channel if you run into any difficulties with the test.

Feel free to do the test in your own time; it will be open until the end of Tuesday 12th December and will take about 25-30 minutes.

Don't miss out on your opportunity to help us enhance your user experience of the next-generation Nucleus Docs.

Thank you for participating!

@RobTobias123
Contributor Author

  • Precursor comms sent to leads by email to encourage them to pass on to their team members when they receive the link.
  • Draft in comment above added to Nucleus General Teams channel.
  • Follow-up email with the link sent to Product design and research leads in the hope it will be forwarded on.

@RobTobias123
Contributor Author

Sent reminders by email and in the General channel to complete the docs test.

@RobTobias123
Contributor Author

Finding from latest research videos:

Users are missing information in the RHS columns of a table when their viewport is small, as the table does not respond to smaller widths - possibly due to too much content / too many columns.

Suggest revisiting the centre-width specification for Astro and assessing each table in the content to see whether it can be optimised/simplified to work better.
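
As a sketch of one possible mitigation (an assumption on my part, not an agreed change), wide spec tables could be wrapped in a horizontally scrollable container so the RHS columns stay reachable at small viewport widths:

```html
<!-- Sketch only: the wrapper class name is illustrative and the table stands in for an existing spec table -->
<div class="table-scroll" style="overflow-x: auto;">
  <table>
    <!-- existing wide spec table markup -->
  </table>
</div>
```

Whether this, simpler tables, or a revised centre width is the right fix would depend on the Astro layout and the content review.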

@RobTobias123
Contributor Author

The unmoderated user research study has now been closed. 11 out of 15 participants completed all tasks. The results data and videos will now be analysed and the insights reported.

@RobTobias123
Contributor Author

On Friday 22nd we converged with Drew and Mekala to discuss key findings noted on the shared Miro board: https://teams.microsoft.com/l/entity/8216e453-3db5-48ee-a3d6-5122f505c8a3/_djb2_msteams_prefix_1441442193?context=%7B%22subEntityId%22%3Anull%2C%22chatId%22%3A%2219%3A7510e676ebde4fd3b4a2ac968c61d3df%40thread.v2%22%2C%22contextType%22%3A%22chat%22%7D&tenantId=a603898f-7de2-45ba-b67d-d35fb519b2cf&allowXTenantAccess=false

These points are to be refined and used to consider potential improvements in the next iterations of the Docs designs. (Some may be feasible/in scope now, others could be added to a backlog).

@RobTobias123
Contributor Author

Tidied and rationalised the content, removing any older 'template' content and adding significant statistics, for a clear and full presentation. (Updated the shared PPT file in the Astro Docs Teams chat.)

@RobTobias123 RobTobias123 added the Status: Completed Nothing further to be done with this issue. Awaiting to be closed label Jan 22, 2024