Tim Tan Huynh [UX]

Views: Working from home

  1. Overview
  2. Process
    1. Exploration
    2. Definition
    3. Development
    4. Preparation
    5. Gathering
    6. Analysis
    7. Communication
  3. Deliverables
  4. Takeaways

Overview

Five colleagues and I studied the views of people who worked from home because of the pandemic. The study was for a course in research design. We used a survey and interviews to gather opinions on productivity, work environments, and employer support. The ultimate goal was to understand common problems and propose general solutions. We produced a journal-style report.

Roles

  • Project co-manager
  • Participant co-recruiter
  • Research co-planner
  • Co-writer

Skills

  • Interviewing
  • Planning
  • Report writing
  • Theme analysis

Process

Exploration

Outline ‘What?’ and ‘Why?’ as well as ‘Who?’ and ‘How?’

Our initial topics were privacy (work-life balance) and productivity in WFH environments. The population was people who worked from home full-time because of mandated quarantine but otherwise worked in an office. We generated tentative research questions; each one was general yet answerable through more specific questions. The ultimate goal was to understand this population’s problems and to propose solutions.

Get feedback from experts: self-interrogate assumptions and select observable constructs

We developed a simple outline that the instructor later evaluated. The feedback was useful and pointed out flawed assumptions. First, we assumed that employees had total responsibility for work-life balance and productivity. We also assumed that they were prioritizing these two constructs; in fact, they were less likely to focus on productivity, which would also have been difficult to assess.

Definition

Refine research questions: make them high-level, answerable, and reflective of research goals

Ambitiously, we selected three topics related to the WFH context: productivity, working environments, and employer support. Based on the feedback, we decided to measure perceptions of productivity instead of actual productivity. We proposed seven research questions in total.

  1. Productivity: How productive do office workers feel when working from home relative to the office?
  2. Productivity: What factors have affected their productivity from home, if any?
  3. Environment: How have office workers modified their physical space at home to accommodate remote working, if at all?
  4. Environment: How has bringing the office into the home affected workers’ work-life balance and daily routines, if at all?
  5. Support: In what ways have employers supported office workers in their new working environments?
  6. Support: How have employers changed work expectations to match the added difficulties faced by workers during the pandemic?
  7. Support: How do workers feel about the support that employers have provided for remote working during the pandemic?

Do secondary research about relevant and related topics

We split into two teams of three people. One team gathered existing research from academic and general sources and summarized the findings in a short report.

Choose types of data to collect, then outline tools to collect it

The course required both quantitative and qualitative data, so we ran a remote survey and a series of semi-structured interviews. My three-person team outlined the selection criteria, item formats, and delivery timeline. I focused on the survey in particular. It had three sections (one for each theme) containing Likert-style, multiple-choice, and/or short-answer items.

Articulate a plan: set a schedule of milestones

For coursework, we wrote a document that summarized the team’s secondary research and collection tools. The schedule was important for setting enough time to collect, analyze, and report data.

Development

Develop tool items that answer the research questions (in incremental and collective ways)

We kept the research questions in a prominent spot while we drafted the survey and interview script. This simple practice helped us stay focused. Each survey and interview item was a proverbial piece of the puzzle; the completed puzzle was the set of answers to our research questions.

Review and revise items for brevity, clarity, and validity

Revising the survey and interview was a collaborative and iterative process. It involved group discussions and rounds of expert feedback. We changed the wording, ordering, and presence of items as needed.

Develop content for recruitment and consent: tailor for different channels

We divided responsibility for drafting copy that suited social media and direct email. One person developed a poster. The intent of the content was to explain the study’s purpose and to entice would-be participants.

Preparation

Prepare collection tools for pilot testing

One person created the survey as a Google Form. It included our informed-consent content. It also had logic to exclude any respondents who didn’t meet the selection criteria.
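As a rough illustration of that screening logic, the sketch below mirrors how a form might route out respondents who don’t meet the criteria. The field names and criteria wording are hypothetical, not the study’s actual questions.

```python
# Hypothetical screening check, assuming the study's criteria were
# "works from home full-time during quarantine" and "otherwise office-based".
def meets_criteria(response: dict) -> bool:
    """Return True only for respondents who satisfy both selection criteria."""
    return (
        response.get("works_from_home_full_time") is True
        and response.get("worked_in_office_before_pandemic") is True
    )

# A qualifying respondent passes; anyone else is excluded.
print(meets_criteria({"works_from_home_full_time": True,
                      "worked_in_office_before_pandemic": True}))  # True
```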

Recruit participants, including pilot testers

We knew that survey recruiting and interview recruiting would be separate. We emailed people whom we knew to be suitable interviewees. Our target was 10 interviewees, and we also identified two backup candidates. In addition, we recruited two more candidates to serve as pilot testers.

Do pilot testing and make revisions

I sent the survey to two colleagues who completed it by themselves. I also did the interview with one of them. Another person did the interview with a relative who also completed the survey. We summarized the pilot testers’ feedback. The entire team met to revise the survey and interview script.

Gathering

Deploy collection tools as appropriate

We posted the survey-recruiting content on Facebook, LinkedIn, Slack, and Twitter. This content included the Google Form link. For interviews, five of us each emailed two interview candidates to schedule interviews by Zoom.

Conduct interviews, then transcribe and review

We recorded the Zoom interviews with participants’ permission. The service’s automatic-transcription feature was valuable. Five team members each conducted two interviews and reviewed them. They each reviewed one other interview as well. The remaining team member reviewed five of the interviews. Thus, we balanced workload and ensured that two people reviewed each of the 10 interviews. This distribution was my idea.
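The rotation above can be sketched to confirm it balances the workload. This is an illustrative model, assuming five interviewers (A–E) with two interviews each and a sixth member (F); the names and round-robin pairing are my own shorthand, not the team’s actual assignments.

```python
# Model of the review rotation: each interviewer reviews their own two
# interviews plus one interview by the next person; the sixth member (F)
# reviews the five interviews still needing a second reviewer.
interviewers = ["A", "B", "C", "D", "E"]
interviews = [(p, i) for p in interviewers for i in (1, 2)]  # 10 interviews

reviews = {iv: set() for iv in interviews}

# Each interviewer reviews both of their own interviews.
for p in interviewers:
    for i in (1, 2):
        reviews[(p, i)].add(p)

# Each interviewer also reviews one interview by the next person (round robin).
for idx, p in enumerate(interviewers):
    nxt = interviewers[(idx + 1) % len(interviewers)]
    reviews[(nxt, 1)].add(p)

# The sixth member covers whatever still lacks a second reviewer.
for rs in reviews.values():
    if len(rs) < 2:
        rs.add("F")

# Every one of the 10 interviews ends up with exactly two reviewers.
assert all(len(rs) == 2 for rs in reviews.values())
```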

Analysis

Group and organize collected data

For the interviews, we entered specific observations into a spreadsheet and grouped them into axial codes. One team member then used spreadsheet functions to generate a codebook that referenced each observation. For the survey, Google Forms generated charts and tables.
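A minimal sketch of that codebook step, assuming each spreadsheet row held a participant ID, an observation, and an axial code. The sample rows and code names below are invented for illustration only.

```python
# Group observation references under each axial code, much as the
# spreadsheet functions did. Data is illustrative, not study data.
observations = [
    ("P01", "Set up a desk in the bedroom", "dedicated-space"),
    ("P02", "Works from the kitchen table", "shared-space"),
    ("P03", "Bought a second monitor", "dedicated-space"),
    ("P04", "Employer paid for a chair", "employer-stipend"),
]

codebook = {}
for participant, note, code in observations:
    codebook.setdefault(code, []).append(participant)

# Each axial code now references the observations grouped under it.
for code, refs in sorted(codebook.items()):
    print(f"{code}: {', '.join(refs)}")
```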

Use inductive (bottom-up) reasoning to build themes among findings

We split into three-person teams again. My team handled analysis of the survey, which had 40 valid responses. I focused on the section that involved employer support. This section was the longest. For each survey item, I examined its distribution of results. I compared them with the results of similar items and noted any patterns.
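To show the kind of item-by-item comparison described above, here is a sketch that tallies and compares the response distributions of two Likert items on a 1–5 scale. The responses and item wordings are made up, not the survey’s data.

```python
from collections import Counter

# Hypothetical responses to two related employer-support items (1-5 scale).
item_a = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]  # e.g. "My employer checks in regularly"
item_b = [4, 4, 3, 5, 4, 2, 5, 4, 3, 5]  # e.g. "I feel supported by my employer"

dist_a = Counter(item_a)
dist_b = Counter(item_b)

# A crude similarity measure: total absolute difference in counts
# at each scale point. Small values suggest similar response patterns.
diff = sum(abs(dist_a[k] - dist_b[k]) for k in range(1, 6))
print(f"A: {sorted(dist_a.items())}")
print(f"B: {sorted(dist_b.items())}")
print(f"count difference: {diff}")
```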

Communication

Write the report

The course required an academic-style report in a journal format. For the Results section, I described the results of the employer-support survey items: I listed quantitative observations and produced charts, with a team member’s help on the latter. For the Discussion section, I outlined conclusions about employer support, combining my survey analysis with another team member’s interview analysis.

Deliverables

Takeaways

In hindsight, we should’ve distinguished between anticipated predictions and acceptable standards. These constructs are related, but they’re distinct and we referred to both of them as “expectations.”

Acknowledgements

  • Velian Pandeliev
  • Matt B
  • Reem A
  • Debbie C
  • Fabian F
  • Winnie L
  • Xinyi Y