Yesterday saw me back in our Campfire chatroom, this time with insights and innovation expert Steve Portigal.
We were talking about a subject that I found very interesting – using insights to inform design decisions and innovation. We spend a lot of time talking about how and why to conduct user research, but very little time talking about the specifics of applying the things that we learn from that research.
Steve is Principal of portigal.com – a consultancy which specialises in helping companies discover and act on new insights about their customers and themselves. He is also the author of Interviewing Users: How to Uncover Compelling Insights.
It was a busy session, but Steve absolutely nailed it. A lot of handy resources came out of it, and I’ve listed them below so that you don’t have to sift through the transcript to find them.
- When to Use Which User-Experience Research Methods – Christian Rohrer
- Championing Contextual Research in your Organization – Steve Portigal
- Well, We’ve Done All This Research, Now What? – Steve Portigal
- Fons Trompenaars’ Global Cultures Model
- Steve’s obligatory Star Trek reference: Kobayashi Maru
- Dollars to Donuts – Steve’s podcast (with transcripts)
If you didn’t make the session because you didn’t know about it, make sure you join our community to get updates of upcoming sessions. If you’re interested in seeing what we discussed, or you want to revisit your own questions, here is a full transcript of the chat.
HAWK: OK, so I’m going to start things by introducing today’s UXpert – Steve Portigal. Steve runs a consultancy that helps companies discover and act on new insights about their customers and themselves. He is also the author of Interviewing Users: How to Uncover Compelling Insights http://rosenfeldmedia.com/books/interviewing-us…
So Steve, today’s topic is a really interesting one. We talk a lot about how to conduct research, but very little about how to actually use the results successfully
I’m going to throw it open to you to give us an intro to the subject
and then we’ll take questions
Thanks for the intro and thanks for joining me here, everyone. It’s fun to get to talk to people like this and I look forward to your questions and thoughts!
It’s a good way to frame things, I think. Research as a data gathering methodology is often talked about.
But what to do with research data (e.g., how to make sense of it, how to find the nuggets) is essential.
And then what do you DO with what you’ve come away with. Okay you have some new perspective, but how do you impact your product, or generate new products, or help colleagues to get on board or take action. Etc.
Welcome if you’ve just joined us. Steve is introducing the subject and then we’ll start with questions
This is really where the real value of research is. Yeah, it’s excellent to get out of the building and you will be changed in big and small ways by doing that, but if you don’t take those next steps, then you’ve not got all you could.
Which isn’t to say it’s easy. It’s weird and ugly and messy and creative and joyful but anyway, that’s the fun stuff.
Last thought before we open it up; for years I’ve personally resisted being called a “researcher” – because the work I like to do is really about those other steps – figgerin’ it out and figuring out how to engage people in taking action. I am STILL trying to figure it out, but anyway researcher always felt like a limiting title for me personally.
But that’s my thoughts on that.
Thanks for the intro Steve. So who has a burning question to kick off the session?
Do people have questions that they’ve come in with that we can use to start off the dialogue?
Steve, I’m assuming A/B testing is a big part of gathering data to inform design, could you walk us through the way you go about conducting this kind of test and how to read the data afterwards?
An indirect non-answer, Shawn, but…
check out Christian Rohrer’s excellent definitive “when to use which research method”
He puts A/B testing in one quadrant – behavioral, quantitative. That’s not as much part of my personal toolkit. I’m much more into qualitative and a mix of behavioral and attitudinal (to use his terms which I wouldn’t unless I’m referencing his diagram).
great – that’s definitely helpful. Our company has recently invested in A/B testing and it’s nice to know there are lots of other avenues to dive into.
So we can talk about A/B testing but you aren’t going to get Steve Portigal’s Best Practices For It in this conversation :)
I know people – maybe those in the room – can weigh in on this more than me; I am sort of a bewildered observer to much of that. I think it’s fascinating and I wonder sometimes is it the worst thing or the best thing or both.
My personal opinion is that testing without theory is sort of this scary big data algorithmic end of the world thing.
Let me try to clarify/qualify.
I’ve done some A/B testing, but I can never figure out *why* anything happened :) Drives me mad :)
Hi Steve, I’m pretty new to UX but less so to research – I was wondering if you face challenges with getting people to take action on ‘softer’ data, as opposed to hard stats taken from large numbers of participants?
I saw a talk about 2 years ago from a consultancy that walked through a variety of case studies of how to engage people on, I think, an ecommerce site. They kept showing us examples
We put this box over here in red, what do you think happened? This, no THAT?!
And so it went on and on and it was so vexing to see these different examples where what we’d assume or hope was not what happened. And as Donna says, there was never any reason – in this talk at least – as to why.
When doing user interviews with different stakeholders and trying to create an innovative product from that information, what’s a good method for narrowing down what parts you should act and move forward on? (other than looking for the patterns)
The takeaway seems to be that all you have to do – and the ONLY thing you can do is test.
It reminds me of the heyday of usability testing (we aren’t in that any more, I don’t think) when there was little thought on design but if you just tested everything then you would know what to do.
It sorta sets back the craft of design and not that I’m on some high-horse about design as a craft but I dunno, just leaves me feeling weird.
Not quite a rant. But some musing.
Shawn – I’ll leave it at that but I don’t know I’ve offered much except Christian’s diagram and his framework for thinking about what you COULD do – also further in that article he talks about different types of design problems – where you are in the process – and how to bring in different methods to address ’em. Good stuff.
Hollie asks the great questions about how to get people to take action on softer data.
yup – thanks Steve!
Right, qualitative research is smaller numbers and the kinds of things we learn are well described by “softer”
I have a few thoughts on that! (you should not be surprised).
I had a surprising little conflict on Twitter with someone yesterday (not surprising that it happens on Twitter I guess but it was unexpected) when someone was asking about persuading stakeholders to DO ux research (which I would say is a good analogue to your question).
I pointed him to some parts of my book on this (chapter 9 I think) and http://www.slideshare.net/steveportigal/champio… this talk where I spoke with people who had been successful in DOING research and getting it INTO the organization.
And he kinda fired back – well what about the ROI? You can’t just entertain stakeholders with stories, you need to be “persuasional” – which was my first time hearing that word but it is a word so I learned something.
Anyway he was pretty worked up that I didn’t have the information in the format he wanted it. ROI! It has to be ROI!
And so you see this struggle. I’ve seen qualitative researchers use the semiotics of quant research in how they present stuff.
70% of respondents said …..
when in fact it’s 7 out of 10.
That’s kind of sneaky.
And so yeah, if you can keep someone from freaking out and then more able to listen, sure.
Hello, I live in a country where users are nicer than users in other places and are not comfortable saying “bad things” about anything. What advice would you give to break, overcome, conquer this barrier?
But really, I think the goal here would be to reframe the question. If you start trying to PROVE that qualitative research is TRUE the way quant research is true, you have lost.
You have to reframe the questions. Or as Steve Baty once said to my great enjoyment – reject the premise of the question itself.
So this guy was looking for ROI but there isn’t any data like that. It’s about something else.
I love that!
One person I interviewed for that presentation said to me something about a series of questions like “Have you been successful with your design?” “Do you know why not?” “Do you know what data you need?” “have you been able to get it?” etc. it’s really looking at the question and not trying to “defend” qualitative research.
Not to harp on my twitter friend (blocked, thank you very much) but he really was dismissive of TELLING STORIES when in fact if you read about changing culture and influence, stories are the most powerful tool we have. That’s how we learn and think different and that’s what these softer insights lead to.
Maybe two final points on this.
One – methodologies can be friends. Find some softer truths and then confirm how widespread they are with quantitative data.
Two – I have no idea what I was going to say but I’ve burbled plenty here I think? Followups welcome and I’ll move onto the next question in the meantime!
Hi Steve. I am new to the UX role as a recent graduate of Tradecraft from the UX track (I believe you may know Kate and Laura and even came in to talk once) and so really appreciate you taking the time to share your knowledge. The product I am currently working on has 2 sides to it. One, a SaaS product for college access counselors, and the second, a loan fund for students’ unmet financial need for tuition. While the SaaS product helps power the fund side with crucial data, from a user perspective neither touches the other and each has different needs. I am wondering if you have any advice on how to reconcile the two when conducting user interviews only with the SaaS users. I realize this is a bit messy of a question so please let me know if I can provide any clarity.
Thanks Steve, much appreciated. A great way of looking at it I think.
Kim asks about how to narrow down and move forward from research.
http://www.slideshare.net/steveportigal/well-we… is a slide deck from a workshop about making sense of user research data and it walks through how to take what you heard in the field and come up with insights.
Short version here is that most people just “debrief” – say what did you hear, well it was this and this and that and kind of write it down and organize it. But what many teams fail to do is go back to the data – that means recordings and transcripts, not your notes which are already skewed – and see what people REALLY said and that always gives you something richer.
So doing that, and then whiteboards, post it notes, finding patterns, reorganizing, articulating a point of view.
The output from that process is not – in this form I’m describing – what you should DO but instead a statement of an opportunity
eg, help people maintain their social connections after they switch their physical office.
that opportunity is kind of a need.
take your opportunity statements and preface them all with
How Might We help people maintain their social connections after they switch their physical office?????
That’s a brainstorming question. So do your brainstorming stuff (generate many ideas, work quickly, say yes, defer evaluation).
Then prioritize and filter – and a great way to filter is to identify criteria and have a small group rank and rate the ideas this way.
e.g., for each idea ask yourselves
How much does this fit with our strategy
How realistic/feasible is it
How quickly can we get it to market
How proprietary is it?
Put ’em in a spreadsheet and do 0 for nothing, 2 for out of the park, and 1 for so-so. No more fine-grained than that. Rate ’em all, tally, and you have a prioritized list of ideas that come from insights that come from data.
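For anyone who would rather script this than build a spreadsheet, here is a minimal sketch of the rate-and-tally step: score each idea 0/1/2 against a few criteria and sort by total. The ideas, criteria labels, and ratings below are hypothetical examples for illustration, not from the session.

```python
# Sketch of the prioritization step: rate each brainstormed idea
# 0 (nothing), 1 (so-so), or 2 (out of the park) per criterion,
# tally the scores, and sort highest-first.

CRITERIA = ["fits strategy", "feasibility", "speed to market", "proprietary"]

# Hypothetical ideas mapped to 0/1/2 ratings, in CRITERIA order.
ratings = {
    "virtual water cooler": [2, 1, 1, 0],
    "relocation buddy matching": [1, 2, 2, 1],
    "office-move countdown feed": [0, 2, 2, 0],
}

def prioritize(ratings):
    """Return (idea, total score) pairs sorted by total, highest first."""
    totals = {idea: sum(scores) for idea, scores in ratings.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for idea, total in prioritize(ratings):
    print(f"{total:2d}  {idea}")
```

Keeping the scale coarse (0/1/2) is the point: it forces quick gut-level ranking rather than debates over decimal places.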
Okay I’ll leave that one for now.
Rebeca asks about how to deal with nice people who don’t say bad things, how to get honest feedback and input.
Well, congrats on living in a country where people are nice!
I’m guessing in your country you also know how to hear the cues that people provide that are harder for a foreigner to read.
I remember my first couple of visits to Japan – despite a lot of prep- learning how to understand when were being told no even though that word wasn’t being used.
So I suspect you have some of that.
I’m trying to remember the framework I saw a few years ago – it might have been fons trompenaars (that’s a name btw) or maybe someone else – that created a map culturally of a bunch of interpersonal factors globally. It was like a myers-briggs for countries.
I think you definitely have variations of this and you can see it more or less in lots of places.
I am thinking about a kind of question you avoid asking even in “bad things saying” places, where you kind of put the person on the spot.
“Do you think Twitter is a waste of time?” besides being a leading question it also kind of boxes the person into answering – kind of reactive or defensive.
And so saying “What do you think about Twitter?” isn’t going to work in a nice country.
But one thing to do say is “We’ve seen there are a couple of types of people out there in terms of their attitudes towards Twitter. Some people think it’s really great. Others think it’s a waste of time.”
You could even say very little after that. You’re presenting a framework. It’s not YOUR opinion. It’s not THEIR opinion. It’s kind of on the virtual whiteboard between you.
What do they think of that framework? that’s what you want them to comment on.
You would say the least amount of words after introducing the idea to get them to say a bit about what they think.
That’s my take on that kind of challenge for now!
reading Laci’s question
Wow! That’s very helpful! Thanks Steve! And yes I am very lucky :)
Laci, you need to add more detail, I think. What are you struggling with in the interviews?
Steve, you are nailing this. Awesome insights. :)
It’s not clear if you are not getting access to users of one side of the design and that’s challenging you?
I’m struggling with trying to design a product for two different users. One that actually uses the product itself and the other that wants specific user data out of it – much of which the access counselors do not currently track.
That’s the queued questions done. Who has something else they’d like answered?
If no one puts up their hand, I’d love to hear “Steve Portigal’s Best Practices” as mentioned at the start of the session. ;)
Ah I found that model I was thinking of for global cultures
The counselors want a crm and have their specific needs and wants out of it which doesn’t always align with the data points we need to extract for the fund side.
What are the most surprising results you’ve had from user research that you could tell us about?
I’m wondering if this is a case where the needs of the users aren’t aligned and so how do you design a solution when doing so may impact one group well and another group poorly?
I think in that case there’s a few things.
One is your priority.
I wouldn’t mind tacking onto Laci’s question — how do we set up research that tells us if we should build one interface or two different ones?
Two is: is there a third way (can I get all mystical new agey)? Is there a way that the experience(s) can indeed provide alignment in the way they are delivered? This is a fantastic design challenge. Thinking about the design opportunity as one to find your way around it instead of through it can be fun.
Obligatory Star Trek reference
(if that’s new to you please read and then stun people when you geekily drop it into a brainstorm or strategy discussion soon)
Three is about managing expectations. How do you roll out changes – how you help people plan and prepare for changes and understand why those changes are happening – that’s a whole strategy and can at least blunt if not reframe how people think about what is being rolled out or provided to them.
there is interesting work on cognitive biases that gets into this. I think there’s a duration bias – something that takes longer to make can be seen as having more quality; transparency bias is when you can see it being made and feel that it has more quality to it, etc.
So applying that persuasive mentality to giving people something other than what they might have expected can be helpful when you are concerned it isn’t the “best” thing for them.
Wayne had a follow on here. Research to tell us if we should build one interface or two.
As usual I’ll answer a slightly different question (damn that Portigal). I feel like so many times we start a study with some different set of users – it could be situational (people that got a large tax refund last year, people that got a small one) or demographic or geographic but it starts with the hypothesis – from our client that these are DIFFERENT USERS.
But as often happens when we look at something, we find that there is maybe more in common than there is different.
Not always, not always, of course.
But just that research lets you resegment among more meaningful categories.
I was just recalling a study we did on people who run small businesses but they are global small businesses. And we were looking for new ideas for products and services – financial services stuff – for them.
We created a really really broad sample among many different criteria because the client was really cool in acknowledging we didn’t know what we didn’t know about what made them different in a germane way.
Steve, there’s a lot of work out there on conducting research and working with results, but very little on writing and choosing the right questions to include in your interviews. Any insight on this process? I’ve always got so many questions, how do I decide what to focus on? How do I make sure I’m not unduly influencing the responses?
We found that there were people in a small business who had that business because they loved the thing (say Mexican handicrafts) and people who had that business because they loved business (deals, selling) regardless of content. And it was a continuum, with people moving along it.
Once you saw that continuum you could see that solving for one group would require a different set of How Might We questions than the other group.
Sometimes you end up with the same solution when you use those different starting points. But at least you wanted to start with those different questions.
Hope that helped you Wayne; I took a slightly different take on it.
Okay maybe I can hit one maybe two more here.
I think it’s just Derek.
Correct, and then it’ll be time to wrap up.
Those are in Interviewing Users but also in a standalone blog post that I’m waiting to load
Seventeen types of interviewing questions.
Chapter 6 of Interviewing Users http://rosenfeldmedia.com/books/interviewing-us… talks a lot about how to ask questions.
and this webcast talks through some of it as well
I think that’s it for me for now?
I want to plug one thing relevant to our topic though
go for it!
A podcast I’ve been hosting called Dollars to Donuts
or with transcripts
it’s interviews with people who lead user research inside organizations. I’ve been so pleased with the response and hope you guys will like it.
I’ve been meaning to check it out all week
Thanks. Only in chapter 5 so far, so glad it’s coming up!
I’m in pre-pre-production on the second series (trying to line up sponsors so send anyone my way!!!)
So I want to say a huge thanks to Steve for his time today. This has been an action packed session.