UXR Portfolio
Eric Smith
UX Researcher (contract)
October 2020

Table of Contents
01 Intro (5 min)
02 Case Study 1 (15 min)
03 Case Study 2 (15 min)
04 Conclusion (5 min)
05 Questions (5 min)

Intro

About Me
I am a contract, qualitative UX researcher currently focused on developer experience and cloud observability technology. I got into UX about 2.5 years ago by training new hires to recruit participants for Google product research. Before moving into training and research, I was a 2nd grade SPED teacher. After a UX Research Coordinator role at Indeed, I am now back at Google conducting research on GCP products.

My Journey to Research
● July 2012: Pflugerville 2nd Grade Teacher
● Jan 2013: Elder Scrolls New Hire Trainer
● Aug 2014: Hoot Call Sales Trainer
● April 2016: Learning & Development Facilitator | Regional Trainer
● April 2018: UX Research Coordinator Trainer (Vendor)
● Dec 2018: UX Research Coordinator
● Oct 2019 - Present: DevEx | Ops Mgmt UXRA (Contract)

How do I work?
I've conducted research for all stages of the product development process: from open explorations in the discover phase to validation-oriented research in the deliver phase.

Discover
● Online surveys
● Customer experience map

Design
● Sketches
● Wireframes

Deliver
● Functioning prototypes
● Design mocks

What methods do I use?
I've utilized a broad toolkit of UX research methods. I select the appropriate method based on stakeholder input and the research questions we are trying to answer.

Digital Research Tools
I am also well versed in using digital tools for UX research. Some of my favorites include:
1. Mural.co - for remote collaboration, workshops, and journey maps
2. UserTesting.com - for remote usability testing and international recruitment
3. GoToMeeting - to give participants control of my machine for remote usability testing
4. Qualtrics - for survey creation and online data collection

Case Study 1: Advanced Logging Analytics

Background
Cloud Logging (formerly Stackdriver) allows users to store, search, analyze, monitor, and alert on log data and events from GCP and AWS. It ingests application and system log data and allows for real-time analysis of those logs.
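To make that background concrete, here is a minimal, hypothetical sketch of the kind of log search Cloud Logging supports, written against its Python client library (google-cloud-logging). The filter string and resource type are illustrative examples, not artifacts from this study.

```python
# Illustrative sketch only: querying Cloud Logging with its Python client.
# The filter below is a hypothetical example of the kind of search and
# analysis the product supports.
from itertools import islice
from google.cloud import logging

client = logging.Client()  # uses application default credentials

# Cloud Logging filter syntax: high-severity entries from Compute Engine VMs.
log_filter = 'resource.type="gce_instance" AND severity>=ERROR'

# Print the first ten matching entries.
for entry in islice(client.list_entries(filter_=log_filter), 10):
    print(entry.timestamp, entry.severity, entry.payload)
```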
Problem Statement
Many customers send their log data to other platforms for analysis. Our team believed this was due to certain 'advanced analytics needs' that couldn't be met with Cloud Logging.

Process

Design workshops with stakeholders
● Explore the problem as a team
● Make progress toward designing the end feature
● Outline possible research questions

Determine how to answer research questions
● Review questions from the workshop
● Confirm methodologies

Construct research plan
● Research goals
● Research questions
● Methodology
● Participants
● Timeline
● Get sign-off from stakeholders
● Craft discussion guide(s)

Recruitment
● Create screener
● Work with Central Recruiting
● Leverage the customer panel list for additional participants

Run sessions
● Iterate on methodology/strategy if need be
● Discuss sessions with stakeholders
● Write debriefs for each session to share with stakeholders
● Submit all videos for transcription

Conduct analysis
● Content analysis
● Use case bucketing

Present findings

Study Goals
● Understand what advanced logging analytics needs our customers have
● Identify how concept mocks (resulting from a design workshop) meet or do not meet those needs
● Validate our hypothesized user flow of when and how analytics needs are part of the troubleshooting workflow among GCP operators

Team
● 1 other Lead UXR
● 2 Engineers
● Product Manager
● Product Designer

My Role
Co-lead UXR and project manager for this research effort:
1. Participant in the design workshop
2. Co-led the research kickoff
3. Authored the research plan and participant screener
4. Co-coordinated/scheduled participants
5. Co-moderated cognitive walkthroughs
6. Co-conducted analysis/qualitative coding of session transcripts
7. Co-presented final findings/insights/themes to the stakeholder team

Research Setup
7 external participants, with job titles ranging from Cloud Architect to Data Scientist.

To ensure we recruited a representative sample, all participants had to meet our baseline Cloud Logging screening criteria; this confirmed that they fit our general enterprise customer persona. Additionally, all participants were screened for logging use to ensure they were performing logging analytics-related tasks (using logs to troubleshoot issues in production); this confirmed that they could speak in depth about their log analytics needs and provide relevant feedback on the design mock based on those use cases.

Baseline Cloud Logging screening criteria + logging analytics use:
● Enterprise company
○ 500 employees or greater
● GCP customer
● Perform operator-related tasks frequently
○ Monitoring/managing the performance of an application running in the cloud (e.g., latency, CPU usage, load balancing, etc.)
○ Primary responder to all alerts (system- and app-related)
○ Identifying root causes of issues
○ Reactively referencing logs
● Perform logging analytics-related tasks frequently
● Use one or more of the following tools for log analytics needs:
○ Splunk, Kibana, BigQuery, Datadog Logging, Tableau, Azure Log Analytics/Kusto, Data Studio, Loggly, SumoLogic, Google Cloud Logging (Stackdriver)

Study Goal: Understand what advanced logging analytics needs our customers have.
We needed to understand customers' advanced logging analytics needs in order to:
● Provide the more advanced analytics capabilities users need to keep their logs in Cloud Logging.

Methodology
● One-on-one structured interview: understand role, responsibilities, frequency of logs troubleshooting, and tools used for logging analytics needs.
● One-question survey: given to participants at the end of the study to surface the most important logging analytics capabilities for users.

Study Goal: Identify how concept mocks meet or do not meet log analytics needs.
We needed to assess participant perceptions of an initial mock in order to:
● Make improvements to the designs and/or alter the direction of the initial designs if need be.

Methodology
● Cognitive walkthrough of concept mocks: obtain user feedback on the flow of the concept mocks.
○ Contextualized walkthrough: based on each participant's own use cases.
○ Low fidelity: keep participants focused on the flow of the mock, not on data or UI elements.

What did we do with the data?
Step 1: Note taking during sessions
Step 2: Stakeholder debrief and clustering
Step 3: Transcript memoing/coding and insights

Analysis
Data:
● Session notes
○ All notes taken by stakeholders and the two co-lead UXRs were put into a spreadsheet organized by the structure of the session discussion guide.
● Transcripts
● Video recordings

Synthesis process: we organized all of the data into buckets by section of the design mock and by frequency of participant comments/suggestions.
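To illustrate that bucketing step (this is a minimal sketch, not the team's actual tooling), assume the coded session notes were exported to a CSV with hypothetical columns participant, mock_section, and code:

```python
# Minimal sketch of use case bucketing, assuming coded session notes live
# in a CSV with hypothetical columns: participant, mock_section, code.
import csv
from collections import Counter, defaultdict

buckets = defaultdict(Counter)  # mock_section -> frequency of each code

with open("session_codes.csv", newline="") as f:
    for row in csv.DictReader(f):
        buckets[row["mock_section"]][row["code"]] += 1

# Surface the most frequent comments/suggestions per section of the mock.
for section, codes in buckets.items():
    print(section)
    for code, count in codes.most_common(3):
        print(f"  {code}: {count}")
```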