UXR Portfolio
Eric Smith, UX Researcher (Contract)
October 2020

Table of Contents
01 Intro (5 min)
02 Case Study 1 (15 min)
03 Case Study 2 (15 min)
04 Conclusion (5 min)
05 Questions (5 min)

Intro

About Me
I am a contract qualitative UX Researcher currently focused on developer experience and cloud observability technology. I got into UX about 2.5 years ago by training new hires to recruit participants for Google product research. Before training and research, I was a 2nd grade SPED teacher. After a UX Coordinator role at Indeed, I am now back at Google conducting research on GCP products.

My Journey to Research
[Timeline graphic, 2012–Present: Pflugerville 2nd Grade Teacher; Elder Scrolls Trainer; Hoot Call Sales Trainer; Learning & Development Facilitator | Regional New Hire Trainer; UX Research Coordinator (Vendor); UX Research Coordinator; UXRA, DevEx | Ops Mgmt (Contract), Oct 2019–Present]

How do I work?
I've conducted research at every stage of the product development process, from open explorations in the Discover phase to validation-oriented research in the Deliver phase. Artifacts I work with across the Discover, Design, and Deliver phases include:
● Online surveys
● Customer experience maps
● Sketches
● Wireframes
● Design mocks
● Functioning prototypes

What methods do I use?
I've utilized a broad toolkit of UX research methods. I select the appropriate method based on stakeholder input and the research questions we are trying to answer.

Digital Research Tools
I am also well versed in using digital tools for UX research. Some of my favorites include:
1. Mural.co - for remote collaboration, workshops, and journey maps
2. UserTesting.com - for remote usability testing and international recruitment
3. GoToMeeting - to give participants control of my machine during remote usability testing
4.
Qualtrics - for survey creation and online data collection

Case Study 1: Advanced Logging Analytics

Background
Cloud Logging (FKA Stackdriver) allows users to store, search, analyze, monitor, and alert on log data and events from GCP and AWS. It ingests application and system log data and allows for real-time analysis of those logs.

Problem Statement
Many customers send their log data to other platforms for analysis. Our team believed this was due to certain 'advanced analytics needs' that couldn't be met with Cloud Logging.

Process
Design workshops with Stakeholders
● Explore the problem as a team
● Make progress towards designing the end feature
● Outline possible research questions
Construct Research Plan
● Research goals
● Research questions
● Methodology
● Participants
● Timeline
● Get sign-off from Stakeholders
Determine how to answer research questions
● Iterate methodology/strategy if need be
● Review questions from the workshop
● Confirm methodologies
● Craft discussion guide(s)
Recruitment
● Create screener
● Work with Central Recruiting
● Leverage Customer Panel list for additional participants
Run Sessions
● Discuss sessions with stakeholders
● Write debriefs for each session to share w/ stakeholders
● Submit all videos for transcription
Conduct Analysis
● Content analysis
● Use case bucketing
Present Findings

Study Goals
● Understand what advanced logging analytics needs our customers have
● Identify how concept mocks (resulting from a design workshop) meet or do not meet those needs
● Validate our hypothesized user flow of when and how analytics needs are part of the troubleshooting workflow among GCP operators

Team
● 1 other Lead UXR
● 2 Engineers
● Product Manager
● Product Designer

My Role
Co-Lead UXR and project manager for this research effort:
1. participant in design workshop
2. co-led research kickoff
3. authored research plan and participant screener
4. co-coordinated/scheduled participants
5. co-moderated cognitive walkthroughs
6.
co-conducted analysis/qual coding of session transcripts
7. co-presented final findings/insights/themes to the stakeholder team

Research Setup
7 external participants, with job titles ranging from Cloud Architect to Data Scientist.

In order to ensure we recruited a representative sample, all participants had to meet our baseline Cloud Logging screening criteria; this way we knew our participants fit our general enterprise customer persona. Additionally, all participants were screened for logging use so that we could ensure they were frequently performing logging analytics-related tasks (using logs to troubleshoot issues in production); this way we knew our participants could speak in depth about their log analytics needs and provide relevant feedback on the design mock based on those use cases.

Baseline Cloud Logging screening criteria + logging analytics use:
● Enterprise company
○ 500 employees or greater
● GCP customer
● Performs Operator-related tasks frequently
○ Monitoring/managing the performance of an application running in the cloud (e.g., latency, CPU usage, load balancing, etc.)
○ Primary responder to all alerts (system- and app-related)
○ Identifying root causes of issues
○ Reactively referencing logs
● Performs logging analytics-related tasks frequently
● Uses one or more of the following tools for log analytics needs:
○ Splunk, Kibana, BigQuery, Datadog Logging, Tableau, Azure Log Analytics/Kusto, Data Studio, Loggly, SumoLogic, Google Cloud Logging (Stackdriver)

Study Goal: Understand what advanced logging analytics needs our customers have.
We needed to understand customers' advanced logging analytics needs in order to:
● Provide the more advanced analytics capabilities users need to keep their logs in Cloud Logging.

Methodology
● One-on-one structured interview: understand role, responsibilities, frequency of logs troubleshooting, and tools used for logging analytics needs.
● One-question survey: given to participants at the end of the study to surface the most important logging analytics capabilities for users.

Study Goal: Identify how concept mocks meet or do not meet log analytics needs.
We needed to assess participant perceptions of an initial mock in order to:
● Make improvements to the designs and/or alter the direction of the initial designs if need be.

Methodology
● Cognitive walkthrough of concept mocks: obtain user feedback on the flow of the concept mocks
○ Contextualized walkthrough: based on each participant's own use cases
○ Low fidelity: keep participants focused on the mock flow and not on data or UI elements

Analysis
Step 1: Note taking during sessions
Step 2: Stakeholder debrief and clustering
Step 3: Transcript memoing/coding and insights

What did we do with the data?
Data:
● Session notes
○ All notes taken by stakeholders and the two Co-Lead UXRs were put into a spreadsheet organized by the structure of the session discussion guide
● Transcripts
● Video recordings

Synthesis process: We organized all of the data into buckets by section of the design mock and by frequency of comments/suggestions from participants.

Results
1. Users are not utilizing logging analytics during the troubleshooting workflow
2. Dashboarding and charting are a commonality across Operators' troubleshooting journeys
3. Logging analytics is not a solo mission for Operators

Why Did it Matter?
Findings from the Advanced Logging Analytics research had a direct impact on the MVP direction of the project, roadmap plans, and future research:
● Based on the research findings, the product is focusing less on the troubleshooting journey of users. It was decided that more research focused on a specific querying language, based on user feedback, was needed in order to move forward with a final design.
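The synthesis step above (bucketing coded comments by design-mock section and ranking by frequency) can be sketched in a few lines of code. This is a hypothetical illustration only: the actual analysis was done in a spreadsheet, and the section names and qualitative codes below are invented for the example.

```python
from collections import Counter

# Hypothetical coded session notes: (design-mock section, qualitative code).
# Real codes came from transcript memoing; these are made-up examples.
coded_comments = [
    ("query_builder", "wants-saved-queries"),
    ("query_builder", "wants-saved-queries"),
    ("charting", "needs-export"),
    ("query_builder", "confused-by-syntax"),
    ("charting", "needs-export"),
    ("charting", "needs-export"),
]

def bucket_by_frequency(comments):
    """Group coded comments by mock section, most frequent codes first."""
    counts = Counter(comments)
    buckets = {}
    for (section, code), n in counts.most_common():
        buckets.setdefault(section, []).append((code, n))
    return buckets

buckets = bucket_by_frequency(coded_comments)
# buckets["charting"] -> [("needs-export", 3)]
```

Sorting the buckets by comment frequency is what surfaces the most common suggestions per section of the mock, which is the ordering the stakeholder debrief and clustering steps relied on.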
Case Study 2: Log Alerts

Background
Cloud Operations has an Alerting feature that gives users timely awareness of problems in their cloud applications so they are able to resolve those problems quickly.

Problem Statement
Cloud Operations currently allows users to alert on errors in Error Reporting and via log-based metrics. However, there is user demand to support alerting on logs directly from a single project.

Process
Meet with Stakeholders
● Understand the problem(s)
● Review past research on the subject/product
● Determine scope of the project
● Outline research questions
Construct Research Plan
● Research goals
● Research questions
● Methodology
● Participants
● Timeline
● Get sign-off from Stakeholders
Determine how to answer research questions
● Iterate methodology/strategy if need be
● Select methodologies
● Craft discussion guide(s)
Recruitment
● Create screener
● Work with Central Recruiting
● Leverage Customer Panel list for additional participants
Run Sessions
● Discuss sessions with stakeholders
● Write debriefs for each session to share w/ stakeholders
● Submit all videos for transcription
Conduct Analysis
● Content analysis
● Use case bucketing
Present Findings

Study Goals
● Understand how users are experiencing creating/managing alert policies in GCP today
○ Surface any pain points
● Assess participant perceptions of a feature prototype
○ Uncover any additional controls needed by users

Team
● Two teams
○ 4 Engineers
○ 2 Product Managers
○ 1 Interaction Designer

My Role
Lead UXR and project manager for this research effort:
1. led research kickoff
2. authored research plan and participant screener
3. coordinated/scheduled participants
4. moderated usability tests
5. conducted analysis/qual coding of session transcripts
6. presented final findings/insights/themes to the stakeholder team

Research Setup
Participants were external enterprise customers that create/manage alerting policies in their organizations.
6 external participants.

In addition to meeting our baseline Cloud Logging screening criteria, participants were also screened based upon:
● Cloud Logging, Error Reporting, or Alerting customer
● Currently creates alerts on critical events
● Mix of developer and operator roles
● 1 participant with more of a security-ops focus

We wanted to speak with customers that create/manage alert policies because these are the customers the Log Alerts product needs to support at scale.

Study Goal: Understand how users are experiencing creating/managing alert policies in GCP today.
We needed to understand the current user experience in order to:
● Surface current pain points with building alerts to discover any final design opportunities.

Methodology
● One-on-one structured interview: understand role, responsibilities, frequency of creating/managing alert policies, and pain points/common practices.

Study Goal: Assess participant perceptions of a feature prototype.
We needed to assess perceptions of a feature prototype in order to:
● Receive feedback, resulting in recommendations on how the design could be improved or supported in its current state.

Methodology
● Usability test: participants were given a scenario and asked to complete 2 tasks within a prototype as part of the usability testing.

Analysis
Data:
● Session notes
○ All notes taken by myself and observers were put into a spreadsheet organized by the structure of the session discussion guide
● Transcripts
● Video recordings

During my data synthesis process, I organized all of the data into buckets by task and by frequency of failures/completions, as well as participant comments and recommendations.

Results
1. Users want more control over the current alerting experience
2. Positive reactions to the prototype, with requests for minor changes
3. Frequency and notification channels are what users want the most control over

Why Did it Matter?
Recommendations from the Log Alerts usability study were taken into a Private Preview launch:
● Based on the research, Product Management/Engineering felt there should be more discoverability and clarity around certain capabilities of this new feature before it hits Private Preview.
● I am now helping to lead this product through its Private Preview launch
○ Writing the eval survey
○ Coordinating participants
○ Conducting user interviews once the Private Preview is finished

Conclusion
● My experience at Google and Indeed has served me well
○ Multiple teams: worked with many stakeholders
○ Understanding the full breadth of research impact from 2 views: Research Operations and UX Researcher
○ Working with niche products and audiences
● I have learned and accomplished a lot so far
○ Working on complex products (Cloud Code | Cloud Operations)
○ In 1 year, I've been a part of many research studies that have had impact on Google Cloud products
○ Gained trust from managers, other researchers, and product stakeholders
● I'm always looking for ways to grow my research skills!
○ Reading books recommended by Sr. UX Researchers/Managers
○ On the Board of Directors of a local (Seattle) non-profit UX organization
○ Kevin Liang YouTube subscriber (UX research tips to help shape skills)

Study 1 (Report)
Topic: Cloud Code: Getting Started
Impact: gathered baseline usability metrics for core functionality; identified usability issues within Cloud Code

Study 2 (Report)
Topic: Customer Panel Meta Analysis
Impact: summarized data from 8 customer interviews, with a focus on Pantheon; top pain points discovered

Study 3 (Report)
Topic: Cloud Code: Quick Start
Impact: prioritized list, from developers, of what was missing from the Cloud Code quick start guide; tech writers made changes based on the research

Study 4 (Report)
Topic: Cloud Code/Run Concept Test
Impact: obtained user feedback on concept mocks integrating Cloud Code and Cloud Run; recommendations were taken into further Eng/Prod work

Study 5 (Report)
Topic: Cloud Run/Code Integration
Impact: obtained a baseline understanding of the current Cloud Run developer experience, the tools and resources developers use, and the steps they take to deploy on Cloud Run today

Study 6 (Report)
Topic: AIOps (Stage 1) Lit Review
Impact: provided key themes from a lit review of ~30 prior works; identified risk, trust/understanding, and value vs. cost as requirements for AIOps features, and shed light on the delicate nature of user trust management

Study 7 (Report)
Topic: Dev/Operator/Admin Personas
Impact: aligned user roles across the Anthos, Developer Tools, Operations Management, and CI/CD teams

Study 8 (Report)
Topic: AIOps (Stage 2) Investigation
Impact: evaluated new AIOps concepts against the AIOps UX Framework; identified decision-making practices and insights

Questions?