Ben Waber is the President and Co-founder of Humanyze. Ben is particularly passionate about the power of behavioral data and analytics and their ability to improve organizations and how people work in general. He has been featured in Wired, CNN, and The New York Times, among other outlets, and his work was selected for Harvard Business Review's List of Breakthrough Ideas and Technology Review's Top 10 Emerging Technologies. In this episode, Ben talks about collaboration data, AI, and expertise.
[0:00 - 5:32] Introduction
[5:33 - 15:37] What is collaboration data and why does it matter?
[15:38 - 29:38] Adaptive statistics (i.e., AI) and its role in data analysis
[29:39 - 39:55] How does expertise impact people analytics inside of an organization?
[39:56 - 42:47] Final Thoughts & Closing
Connect with Ben:
Connect with Dwight:
Connect with David:
Podcast Manager, Karissa Harris:
Production by Affogato Media
Announcer: 0:02
Here's an experiment for you. Take passionate experts in human resource technology. Invite cross-industry experts from inside and outside HR. Mix in what's happening in people analytics today. Give them the technology to connect, hit record, pour their discussions into a beaker. Mix thoroughly. And voila, you get the HR Data Labs podcast, where we explore the impact of data and analytics on your business. We may get passionate and even irreverent, but count on each episode challenging and enhancing your understanding of the way people data can be used to solve real-world problems. Now, here's your host, David Turetsky.
David Turetsky: 0:46
Hello, and welcome to the HR Data Labs podcast. I'm your host, David Turetsky. Like always, we find fun, fascinating people to talk to you about the world of HR data, analytics, and technology. And also, like always, we have with us our friend and co-host, Dwight Brown from Salary.com. Hey, Dwight, how are you?
Dwight Brown: 1:05
Hey, David. I'm good. How you doing?
David Turetsky: 1:07
Good. We have a special guest today: Ben Waber, who's the co-founder and president of Humanyze. Ben, hi, welcome in! How are you?
Ben Waber: 1:15
Hey, doing well! Thanks for having me. I'm doing pretty well. Just enjoying the week!
David Turetsky: 1:22
And hopefully you haven't been broiling under the temperature that we have been finding ourselves in lately.
Ben Waber: 1:27
Luckily, now it's nicer in Boston. So I'm planning to get outside. But uh, yeah, it's been a hot couple of weeks. So, you know, when it gets down to, like, 90, you're like, wow, that's really cool.
Dwight Brown: 1:38
Got to break out the jacket, right?
Ben Waber: 1:40
Yeah.
David Turetsky: 1:40
Yeah, well, yeah, I actually ran this morning in what was it? Lounge pants. So yeah, it's actually been cooler!
Dwight Brown: 1:48
Lounge pants? Velour lounge pants.
David Turetsky: 1:53
No, I didn't say velour. I didn't say that! We're not going back to the 70s. So Ben, tell us a little bit about Humanyze and how you got to where you are today.
Ben Waber: 2:02
Sure. So Humanyze is a workplace analytics company that really spun out of the PhD research that my co-founders and I were doing back at MIT. And essentially, what the company does is use data that companies already have, you know, think email, chat, meeting data, but also sensor data about the real world, to really look at collaboration, how patterns of collaboration evolve and change over time, and how that relates to outcomes we care about. You know, retention, performance, what have you. And we don't do things at the individual level, everything's aggregated up. And of course, we can talk about, you know, that whole approach and everything. But yeah, I mean, that's what we do. And again, it gets back to when I was first collecting this kind of data from, you know, companies for my PhD research, and really just being shocked by these large, successful companies. I always assumed that when they made big people decisions, you know, to reorg, build the new headquarters, of course, they must use a lot of data, and run tests, and then based on that make decisions. And I was quickly disabused of that notion. And so it felt to me like this was an important thing to start working on. And that's how I got into it.
David Turetsky: 3:04
Fascinating. Well, so one fun thing that you may not know about Ben is?
Ben Waber: 3:12
Oh man, there's lots of fun things that I could maybe tell about myself. But one thing that I will talk about this time, or at least a little bit, is that to inform my diet, I read meta-analyses of nutritional research. And so we can talk about that. I enjoy reading scientific papers, but you should read meta-analyses, because single studies can be misleading. But meta-analyses are pretty good.
David Turetsky: 3:34
For those people who don't know what a meta-analysis is...
Ben Waber: 3:37
So a meta-analysis is essentially when researchers look over a large body of research and then try to statistically summarize results about certain outcomes. So for example, you can have a single study that says eating almonds every day is healthy. But there can be many, many factors that influence whether or not that is a real thing. Part of it is that people generally only publish results that have significant outcomes, which causes problems. But then also, the question is, again, how big are those effects? And so looking over many, many studies, you get a much better sense of, like, is this a real effect and how strong is it. And I just find that helpful. And, you know, it doesn't inform 100% of my diet, but a decent percentage of my diet is like, okay, these things are pretty good for a variety of outcomes that I care about.
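To make that concrete, here is a minimal Python sketch of the statistics behind a simple fixed-effect meta-analysis: it pools effect estimates from several studies by weighting each one by the inverse of its variance, so more precise studies count for more. The study numbers below are invented purely for illustration, not taken from any real nutritional research.

```python
import math

# Hypothetical effect estimates (e.g., standardized mean differences) and
# their standard errors from five made-up studies of the same outcome.
studies = [
    (0.30, 0.15),   # (effect, standard error)
    (0.10, 0.20),
    (0.25, 0.10),
    (-0.05, 0.25),
    (0.18, 0.12),
]

# Fixed-effect (inverse-variance) pooling: precise studies get more weight.
weights = [1.0 / (se ** 2) for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# A rough 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```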
David Turetsky: 4:23
So you're a geek like me! I actually love to use things or tools to inform my decision making around food. I used to use MyFitnessPal, and I put everything I ate into that and see what it did to my protein content, my sodium content, and how it fit into my average calorie intake per day. So I used to make that, well, conscious decision to do all that work. Of course, I've forgotten all that, especially when I've had whipped cream pie for dessert. So there you go.
Dwight Brown: 4:53
Hey, dairy.
David Turetsky: 4:55
Dairy, coming from that state that you're in? Absolutely. Just so everybody knows, he's in Minnesota. They're not necessarily the dairy state, are you?
Dwight Brown: 5:03
No, no, Wisconsin, I think is the official dairy state. But we're close.
David Turetsky: 5:08
You're very close geographically. So today, our topic is going to be one for the geeks in all of us: we're going to be talking about collaboration data, AI, and expertise, how there's a lot of promise in all of that, and the pitfalls that are associated with it. So my first question for you then, Ben, is: what is collaboration data? And how can it change how organizations manage themselves?
Ben Waber: 5:41
When we think about collaboration data, I think I mentioned a little bit earlier what sort of data sources I like to look at when trying to understand how people work. So you can think about email, chat, calendars, Zoom, all the systems that we use to work. They generate so much information, sort of digital exhaust, about how we're collaborating, about how work happens, you know, who communicates with whom, when, how often. And if you think about accumulating that data over time, it essentially gives you this digital X-ray of the organization: what does the social network look like? How does it evolve? How do we spend our time? And even in physical workplaces, there are increasingly sensors that over time can allow you to estimate how much people interact face to face, those network patterns. And this is what we first started seeing in our research. I mean, there's decades, obviously, of research using surveys to look at collaboration. But with this sort of more quantitative data, you just get, you know, orders of magnitude more in terms of scale, and in terms of the different ways you can analyze it, to really understand phenomena that relate to these outcomes we care about. And it doesn't just have to be systems like this. You could use JIRA, if you're a software developer, to look at tasks, right, who's working on the same task. And these things are just orders of magnitude more predictive of pretty much any outcome you can think of than really the vast majority of other data that people have looked at in the past. And so, you know, almost all of us already have access to this kind of data. And of course, the question is, there are so many different ways you can look at it: which ways to look at it are, you know, more predictive in general, and which things lead you down really problematic rabbit holes? And that's obviously a lot of what we've looked at.
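As a rough illustration of what this can look like in practice, here is a minimal, hypothetical sketch that turns aggregated communication metadata into a network and computes a couple of standard measures. The edge list, the team names, and the use of the networkx library are illustrative assumptions, not a description of Humanyze's actual pipeline.

```python
import networkx as nx

# Hypothetical, already-aggregated communication metadata:
# (sender, recipient, number of messages in the period). In practice this
# would come from email/chat/calendar logs, stripped of content and names.
edges = [
    ("eng_1", "eng_2", 42),
    ("eng_2", "eng_3", 31),
    ("eng_1", "sales_1", 3),
    ("sales_1", "sales_2", 27),
    ("sales_2", "sales_3", 19),
]

G = nx.Graph()
for a, b, n in edges:
    G.add_edge(a, b, weight=n)

# Simple structural measures people analytics teams often start with.
degree = dict(G.degree(weight="weight"))    # total communication volume per person
betweenness = nx.betweenness_centrality(G)  # who bridges otherwise separate groups

print("Weighted degree:", degree)
print("Betweenness:", betweenness)
```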
David Turetsky: 7:31
And so this is beyond what used to be considered ONA right? This is like, ONA on steroids?
Ben Waber: 7:37
Yeah, I mean, network analysis is obviously a big part of this, right? And there is a history of that in the social sciences, again, that goes back to the 50s, around using surveys to ask, you know, who do you talk to? And that is useful, right? There are certain things you can get from that that you can't get from this collaboration data. Like, who do you trust, for example? You actually can't really get that from this quantitative data. You know, on the other hand, people are terrible at recall. If I ask, you know, who did you talk to yesterday, people are only about 30% accurate on that, right? And we now have, again, all this data on how work happens. And so certainly you can look at those network effects. You can also look at how people spend time. And even at Humanyze, we now have the largest multi-platform dataset of workplace interaction in the world. And so that means you can start looking at not just what happens within a single company, but how does work change in general. And that's where you start getting network effects even beyond single companies, and it starts to get very fascinating.
David Turetsky: 8:26
I imagine there's a lot that would need to be done to be complete about the picture, to take into consideration things like cell phone data, as well as, you know, Slack, and Discord for kids. I don't know how many companies are actually using Discord, but I know kids talk mainly through Discord these days. And so, are we trying to be complete? Are we trying to be good enough? What does good enough mean when you're getting into this kind of analysis? Because you could be losing a lot of flavor from data that might be available but a little bit harder to get access to.
Ben Waber: 9:02
Yeah, I mean, it's an important point. So first of all, the data is never going to be complete. Never. And I think people have to accept that. And people also can't pretend that it is complete, because it will never be complete. To your point, cell phone data is a great example. Sometimes we get access to that, right? Sometimes people use company-related cell phones, and we can look at that data. And again, we could talk about how we process it. We don't collect names; it's all sort of anonymized. So there are important issues there, and we're happy to talk about them. But even when you get that, you still don't get personal cell phone data. You'll never get that. If I meet you for coffee after work, I'm never gonna get that. And the way to think about this is, I do think this idea of good enough is important, right? So first of all, it's what do you have? What can you start with? And then try to at least subjectively get a sense, either through surveys or through interviews, of where your holes are, and you try to always get better. But you're never done, is the thing, right? And this will always change. Maybe today you wave a magic wand and I get Slack and I get 100% in there. I mean, you won't, actually, but pretend you do, right? In three months it's going to be different, right? And so you have to constantly look at this, right? Because it really speaks to not just the accuracy or quality of the analysis you can do, but also to what biases are going to be in there. Right? And, yeah, if you can imagine, for example, you know, if older employees are more likely to use personal cell phones than younger employees, then your metrics are going to be systematically biased against those folks, and you wouldn't know it. Right? You would never know it.
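To see how that kind of invisible bias plays out, here is a small, purely synthetic simulation: two groups behave identically, but one group's interactions are captured less often because more of them happen off instrumented channels, so the measured metric quietly understates that group. Every number in it is made up for illustration.

```python
import random

random.seed(0)

def measured_interactions(true_count, capture_rate):
    """Count only the interactions that happen on instrumented channels."""
    return sum(1 for _ in range(true_count) if random.random() < capture_rate)

# Both groups actually have 100 interactions per person per month.
TRUE_COUNT = 100
younger = [measured_interactions(TRUE_COUNT, 0.90) for _ in range(50)]
older = [measured_interactions(TRUE_COUNT, 0.60) for _ in range(50)]  # more off-platform

print("Measured average, younger group:", sum(younger) / len(younger))
print("Measured average, older group:  ", sum(older) / len(older))
# The older group looks roughly 30% less collaborative even though the
# underlying behavior is identical; the metric alone would never reveal that.
```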
David Turetsky: 10:31
Well, those people are also probably going to get together after work, right? They're probably going to go out for a drink or coffee or whatever. And those offline conversations, unless they're in your calendar, which they usually aren't, we're gonna go down the street for a hot dog or for a pint at the pub, you know, unless those things are actually in something, there's no way to capture them, or the bias that can be interpreted from them. Where we didn't invite XYZ to this thing, their exclusion is actually important, right?
Ben Waber: 11:02
Yes, absolutely. I mean, this is something where I lived in Japan for a while, and we have a bunch of Japanese customers. And, you know, obviously, in Japan, even more than in the US or in Europe, it's a very common thing to go out for dinner or for drinks with your coworkers. But who does that really matters, and it obviously has a lot of, you know, career-relevant outcomes, things like that. And so you just have to be aware that these things are happening, right? And to what extent are they happening? And again, you're never going to get 100% predictive power on anything. That being said, with what is easily available, you can do extremely well, right? But always acknowledge that you have those gaps. And I think one of the tensions is, first of all, to your question, being satisfied that, okay, we have enough to do some valid analysis, but also acknowledging to, you know, employees, folks in leadership, when you're presenting these analytics, that they're not complete, while doing that in a way that doesn't totally undermine the fact that this is so much better than anything anyone has ever used before. Right? The only difference is that what's missing is very clear. Whereas in the past, when people were making people decisions, you'd have, like, the CEO say, well, I know that engineering doesn't talk enough to sales. But of course, they didn't know that. Right? That was a hypothesis they were stating, and that's what we're talking about, right? And so I do think it's important to lead with that: this is still much better, and much less biased in a variety of measurable ways, than just personal observation and things like that. That doesn't mean it is unbiased. But it means, if I look across things, there are a lot of positive attributes of this kind of data. But again, ideally these analytics are still interpreted by humans through their own context, because no matter how much data you get, no matter how predictive the algorithms are, they never actually know the context of the work that people are doing on a particular team.
David Turetsky: 12:52
So to kind of tug on one of the strings you just mentioned, you talked about, you know, having a conversation with the CEO and saying sales and engineering don't talk. And part of the question is about how organizations manage. And so if we need more efficiency in the relationship between sales and engineering, maybe one of these threads that we can pull on is how do we make that happen more? And the fact is that sales manages through salesforce.com and the engineering team manages, as you said, through JIRA. So how do we create those linkages, so the things that are being captured in JIRA and the things that are being captured in Salesforce come together? That's one of those outcomes, right, where we can start to bridge those gaps and be able to help, you know, solve that communications issue.
Ben Waber: 13:37
I mean, you certainly see the effects of balkanization in systems with this kind of data, right? I remember one of our customers: the engineering organization were heavy users of Slack, but other departments didn't use it. And if that's where all the communication is happening within those teams, that means you're much more likely to have less communication between teams. And so this is always a question when companies look at things like collaboration technologies, right, and they use more and more things that are purpose-built for very specific applications. And that's not to say it's bad. But it's to say, well, you'd better understand the implications of that for how people work. Because even if it makes a particular task that a team does more efficient, if it detracts from the coordination and collaboration that has to happen across teams, it may be negative. That's not guaranteed, but I'd want to know that. Right? And I think, again, that's something that people don't look at, because most of the time, like, if I'm an engineer and I'm working in JIRA, I might feel way better using JIRA than Salesforce, right? And so I feel more productive. And if you say, well, I'm gonna force you to use Salesforce, I might say, that sucks, I'm actually spending more time on it and I feel less productive. But you don't appreciate that actually now you communicate slightly more with this other team, and that vastly speeds up their work. And this is where, again, this kind of data can be very helpful, because it makes those things visible. It's hard to say, for every single instance, whether this is right or wrong. Again, you can correlate it, but correlation doesn't guarantee that it's correct, right? You can't have causation without correlation, to be clear, so it's still important to know that, but I'd still want to test these things out, and I'd still want to look at this over a long period of time. But again, doing that to sort of take a lot of the emotion out of these decisions, I think, is really useful.
Announcer: 15:26
Like what you hear so far? Make sure you never miss a show by clicking subscribe. This podcast is made possible by Salary.com. Now back to the show.
David Turetsky: 15:35
So maybe, Ben, we might be able to use tools like artificial intelligence and machine learning to help us inform how those situations can get solved. 'Cause, you know, AI is different than tools we've used before, right? It helps us learn about datasets in ways in which our minds aren't necessarily tuned to be able to create those correlations. So is AI the answer?
Ben Waber: 16:08
I mean, I think in general, tools from AI and machine learning are helpful. But I do think we probably do ourselves and the technology a disservice by even referring to it with names like AI. Like, I hate that name, personally. I like to call it adaptive statistics, because that's what it is. Just to be clear, these things are correlation engines. That's all they are. Right. And what is interesting about them is, again, what makes them more predictive is that they can ingest lots of data. So it's interesting, when you look at a lot of these models, why have things suddenly been able to get a lot more predictive? Why can we do, you know, image generation with these algorithms much better than we could in the past? It's not because the algorithms themselves are different. Like, they're not; they're the same things they were in the 80s. The difference is that now we can literally mine every single piece of text on the internet and look at, you know, when you say X, what happens in an image. That's all it does. Like, it's very dumb from that perspective, right? But again, to your point, it can find these relationships that it would take humans way too long to look at. That doesn't mean it's real, though, right? So if I create an algorithm, say I want to train an algorithm to recognize pictures of dogs and cats. That's all I want it to do. And so let's say I feed in tons of pictures of dogs from the internet, but it turns out that the background in every single image of a dog is blue, and I feed in, you know, millions of images of cats, and the background in every image of a cat is white. And then I feed in an image of a dog with a white background, it's gonna say it's a cat. Because it's not learning; it's looking for the simplest thing. There's no learning actually happening. It's just correlation. What's the simplest thing? It's that, right? Yeah, so it's useful for generating hypotheses, right? It's very useful for showing these relationships at large scales. And a lot of these things, again, we couldn't do in the past. And what's also nice is that it changes over time, right? We get new data, these things can change. And that's great, right? But it also means that if there are these spurious correlations with things that don't matter, these biases that creep in, you have to be very attentive to them. And so I do think there's a lot of potential in using these algorithms in the people space in general. But I do think a lot of folks are like, oh, well, AI is gonna solve all our problems. And it's like, no, it just updates these things more regularly. Like, it's fine. It's useful, but it depends on what you're trying to do.
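Here is a toy version of that dog-and-cat example, a minimal sketch using scikit-learn on made-up numeric features rather than real images: because background color perfectly separates the labels in the training data, the model leans on it, and a "dog on a white background" gets labeled a cat. The features, numbers, and model choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Toy training set: feature 0 says "the background is blue", feature 1 is a
# weak, genuinely animal-related cue. In this made-up data every dog photo
# has a blue background and every cat photo a white one, so feature 0
# perfectly predicts the label even though it has nothing to do with dogs.
is_dog = rng.integers(0, 2, n)
background_blue = is_dog.astype(float)                 # spurious but perfect signal
animal_cue = 0.3 * is_dog + rng.normal(0.0, 1.0, n)    # weak real signal
X = np.column_stack([background_blue, animal_cue])

model = LogisticRegression().fit(X, is_dog)

# A "dog" photographed against a white background: background_blue = 0.
dog_on_white = np.array([[0.0, 0.3]])
print("Predicted class (1 = dog):", model.predict(dog_on_white)[0])  # most likely 0, i.e. "cat"
print("Learned coefficients:", model.coef_[0])  # the background feature dominates
```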
Dwight Brown: 18:19
And so often we have these unrealistic expectations, as you're just pointing out. Now we've got these expectations that somehow, you know, people think that we're waving a magic wand and spreading pixie dust on something, that we're going to have this huge, momentous revelation that comes from the AI. And one of the pieces that you touched on before, I think, really gets to that point, and that is that as you're running some of these analyses, it really becomes about visibility. And ultimately, at least to me, the best use of AI is to use it to generate more pointed and additional questions that you then go forward to try to answer.
Ben Waber: 19:06
Yes. And I think that we also have to remember, especially when it comes to within organizations, right, we're not dealing with, you know, testing what links people click on a website to buy stuff, right? We can't generate tests with the same frequency that you see online. And so that means that people have to take action from this kind of data. And so you might have an algorithm that is more predictive, but where you're not able to expose, you know, how much different factors influence that prediction. Right? That's way less useful than something that is marginally less predictive but where you can expose those correlations, because then a person could see that and say, oh, well, you know, for this group, the thing that most drove this suggestion is that people have a lot of focus time, you know, focused work time. But actually, this group shouldn't be doing that, so this is irrelevant. And, like, an algorithm is never going to know that. Right? But you would know that and say, okay, I still appreciate the flagging, and for a lot of cases it will be correct. But that kind of exposure, I think sometimes people, again, get enamored with, oh, there's some tool that gives me such high predictive power. But sometimes a slightly simpler tool, right, would actually be much more useful, much more effective, and much more valid for larger groups.
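As a rough sketch of what "exposing the factors" can look like, here is a small Python example on invented team-level data: a plain linear model whose per-feature contributions a reviewer can sanity-check against their own context, rather than a black box that only emits a score. The feature names, coefficients, and the "delivery score" target are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200

# Invented team-level features: weekly focus hours, meeting hours,
# and cross-team messages. The target is a made-up "delivery score".
focus_hours = rng.normal(12, 3, n)
meeting_hours = rng.normal(15, 4, n)
cross_team_msgs = rng.normal(40, 10, n)
score = 2.0 * focus_hours - 0.5 * meeting_hours + 0.1 * cross_team_msgs + rng.normal(0, 2, n)

X = np.column_stack([focus_hours, meeting_hours, cross_team_msgs])
model = LinearRegression().fit(X, score)

# Per-feature contribution for one team: coefficient times feature value,
# so a human can see what most drove this particular suggestion.
team = X[0]
contributions = dict(zip(["focus_hours", "meeting_hours", "cross_team_msgs"],
                         model.coef_ * team))
print("Prediction:", model.predict(team.reshape(1, -1))[0])
print("Contributions a reviewer can sanity-check:", contributions)
```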
David Turetsky: 20:29
But that's why, when we start talking about these technologies and we use them in the context of people analytics, one of the things that drives me crazy is that there's not enough of an n, usually, to be able to get reliable predictions on what it is that they're trying to solve for. Unless you're talking about a Walmart, or you're talking about an entire industry, and you have so much great data about the SOPs that a job is supposed to be utilizing. So the context for a job, the skills needed in a job, the skills of the individual employees in the job: you have to have the complete model, to me. You have to have at least a more complete model to be able to make not just better predictions, but to ask better questions and get better answers. And the problem is, and we've said this 1,000 times on HR Data Labs, so I apologize to the people who have heard me say this too much, but HR data sucks. And the fact that the data sucks means that the models that are generating the questions and the answers that come out of it have to be able to make assumptions that say the data sucks. And so here's what we can use out of it to be able to do what? Predict something, or be able to give us back some intelligence? You know, even using AI, or sorry, the algorithms, to be able to talk about how much a job should get paid. We put so much emphasis on the job title. But the job titles are all wrong. And the job titles are crap. So we're measuring an incomplete field, and trying to predict a very important y that is, in many cases, going to have such huge error in it. And sorry for the diatribe. All I'm trying to say is that the application of these algorithms to people analytics, to me, seems flawed by its very nature.
Ben Waber: 22:26
I think that there are a couple of really key points here. So one is, when you first decide, say, for example, I want to figure out what salary I should give a person, I want to train an algorithm that's going to tell me, for a current job candidate, how much should we pay them. What you first have to appreciate is how much bias is already in that setup. Some of that is: what are you optimizing for? Because there are many things I could say I want to optimize for. I could say I want to optimize for, you know, how much money the company is paying. I could say I want to optimize for retention. I could say I want to optimize for a whole bunch of things, and maybe a combination of those things. But ultimately a person is choosing that. If you are the people analytics person, you chose this thing. There's nothing right or wrong about that, right? But you have to realize that you've already set those parameters. And so that's the first thing: there's nothing magic about that. You can optimize the hell out of whatever you want, and that's just your choice. But this is also important, because then, again, when a number is spit out, even if it was, like, super accurate with regard to what I'm optimizing for, there were all these assumptions that happened there. And the concern that I have is that if you are, you know, a manager, you're the person who's going to make the offer, and you don't see that. All you see is you've got a program on your computer, and it spits out a number. You don't see the issues with the data. Right? That maybe you don't even have enough data for this. You know, there were, like, five of these job postings in the last year, so any prediction is just total garbage. It's not even useful, like, at all, right? Well, you don't see that, typically. And I think that, for vendors like myself who are providing systems that do this commercially, but also for internal folks who are doing it, it's incumbent upon us to make those things more visible, or to add friction, because, in particular, generating single point predictions is a huge issue. I mean, there's lots of work in HCI, human-computer interaction, on how to do this, but I think that a lot of the folks in analytics haven't looked at that. And it's so important, because where you create frictions to make this still useful, but to show that kind of inherent uncertainty in, you know, in outcomes, I think is so, so important. And if you start to see error bars that go from zero to $500,000, at that point you don't even show it, because, you know, we have no idea. And that's literally the case for one of my customers. It's a different context, but I think it's relevant. We're not analyzing the entirety of a Fortune 100 company; we're not analyzing data from the entire company. We're analyzing data from, you know, a subsection of their employees, still thousands of people, so a large group, but it's a subset. And because of that, and because of the way that they cut off, like, you know, access to data for other parts of the company, there are certain metrics that are just super inaccurate, and we know it. Like, we know it. And so we just don't show them. We calculate them, but we know that these are likely highly inaccurate. And that's a choice you can make. Right?
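One very literal way to add that kind of friction, shown here as a minimal Python sketch: compute a confidence interval around the estimate and refuse to display a number at all when the sample is tiny or the interval is absurdly wide. The thresholds and salary figures are arbitrary, illustrative choices, not anyone's production logic.

```python
import math
from statistics import mean, stdev

MIN_SAMPLES = 20             # arbitrary illustrative threshold
MAX_INTERVAL_WIDTH = 30_000  # don't show salary estimates wider than this

def salary_estimate(observed_salaries):
    """Return (estimate, low, high), or None if the answer shouldn't be shown."""
    n = len(observed_salaries)
    if n < MIN_SAMPLES:
        return None  # e.g., only five postings last year: any number is noise
    m = mean(observed_salaries)
    se = stdev(observed_salaries) / math.sqrt(n)
    low, high = m - 1.96 * se, m + 1.96 * se
    if high - low > MAX_INTERVAL_WIDTH:
        return None  # error bars from "zero to $500,000": suppress, don't display
    return m, low, high

# Five observations only, so the function declines to give a number at all.
print(salary_estimate([95_000, 102_000, 98_000, 110_000, 91_000]))  # None
```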
Like, you don't have to. No one's legally saying you have to show this, or that you should. But I think that not just practitioners, but ideally also, you know, leaders within companies have to become at least sophisticated enough to say, I want to know what I shouldn't be looking at, right? Because it's not accurate enough.
David Turetsky: 25:48
Well, the world of pay has always been opaque. It's always been opaque. We've held our tools, whether it's surveys, whether it's using point factor job evaluation to determine pay or level, we've always held those things secretively, until now. Because now the law says that you need to be more transparent about pay, you need to tell people what starting rates are. And if you start to talk about it, you can't stop. You can't put a wall around that, because people are gonna say, well, how did you get to that? And so what we've been talking to clients about, especially during a lot of the consulting we've done recently, is open it up, make the box go away. Tell them how you achieved what you're dealing with here. You're using art as well as science to be able to develop these things. Because if you try and just say it's purely science, it's horseshit! By the way, that's a technical term for anybody who thinks this should be explicit. That's a technical term! 'Cause horseshit is actually developed from eating roughage and turning it into poop.
Dwight Brown: 26:55
Thank you David.
Ben Waber: 27:02
Like, why do certain people get paid a certain amount of money? There's not a good reason for it. Again, there are reasons, right? We'll say, oh, you're worth it. But you don't know these things. These are guesses, right? And I think, to your point, being transparent about it is so important. Having the algorithms now written out, so it's not just in my head, right, like I have to write rules down, it makes it more transparent. It also means you can't get away with some of the crap that you got away with before, which is just, like, we're buddies, I'm gonna pay you more. Like, no, you can't. It would be explicit if you did that, right? Like, you're paying me $10,000 over what, you know, it says. And again, that still doesn't mean it's not biased. It is. And one of the worries that I have is, even if in aggregate you can prove it's less biased than a single individual, which I believe is largely true for a lot of related questions, and again, it's debatable, but I've seen this for things like automated hiring. Okay, even if you can say it's less biased on average, what you're essentially doing is saying, I'm taking a single kind of bias, and I'm just scaling that up to everybody. Right? And again, I think in automated hiring it's even more clear than in salary. Right? So, there's great work by Katie Creel, who's going to be at Northeastern next year, who's done work on this. The idea is: today, I could apply for a job, and maybe there's some algorithm that goes over it, and I don't get it, right? But that shouldn't directly impact me applying for a job with the same title at another company, right? That other company should have, you know, different biases. And I'm still probably less likely to get it than some other person, but I should be able to have a shot. But if everybody uses the exact same algorithm, I have no shot. Like, never. And there's some bias in there, right? It could be right, of course. But if there's some bias, if there's something in, you know, my resume that, just for whatever reason, it's totally random, cuts me out, that's gonna be locked in. And so there are ways, and again, there's been great research recently in the ethics community on how to deal with this. These are the sorts of things where, as these tools become more widely used, we have to think about how we design things and these processes, right, because they will create those problems.
David Turetsky: 29:39
So let's go to the third question, because I think this one's going to be even more fun, which is around expertise, and how management today is usually done by leveraging, and I hate this, I hate the term, best practices. I use it all the time in consulting, and it's just not true. We don't use best practices. We use the most applicable practices. But I'm not going to get there yet. How does this deal with, or how does this impact, people analytics inside of organizations?
Ben Waber: 30:07
I think when it comes to work in general, everybody works, and so everyone thinks that they're an expert on work and management. And that creates problems, because the vast majority of people don't understand what actually leads to the outcomes that we care about, right? I care about high-performing organizations. I care about, you know, low attrition, let's say. What drives that? What do most people do now? It's, again, best practices. But what does best practices mean? Best practices means I see some successful company and I copy what they do. Right? That's what it is. Or, we've been successful for a long time, so we're gonna keep doing what we've been doing. Is that actually what led to success? Unclear, right? Again, you have, like, Netflix, for example, I'm going to call them out, right? People said, well, they've got this culture deck that they posted online, and their culture, that must be why they're so successful. Well, it turns out their culture is crap. Right? It turns out they were successful because their product was so good, right? They had many terrible management practices, but it basically didn't matter over a period of time, until it did matter, until suddenly it does matter. And it was funny, because I was talking to one of our customers a few weeks ago, and for some of their upper management, their work-life related metrics are terrible. Their upper management are working really long hours, they're working on weekends. And, you know, this is a problem for a variety of reasons: it's going to tend to lead to higher attrition, and you also worry about not just burnout, but that spreading to other parts of the organization, and then, again, accelerating those processes there as well. And they say, well, listen, you know, we're one of the most successful organizations in the world, we've been doing this for decades, and in lots of other companies people work really hard, like, long hours, and they're really successful, so this must be right, and we're just the exception. So think about this, alright. Imagine you are, let's say, a 200-year-old company, right? You're, let's say, one of the big banks. They're a great example, right? Some of the big banks have been around for hundreds of years. And let's say they had a tradition that every December 31, they lit a million dollars on fire. Just lit it on fire. Did it every single year. So if you look over, like, 200 years, they would have lost two hundred million dollars doing that. But, like, yeah, that's nothing compared to, you know, the profit of these companies from a single day.
David Turetsky: 32:32
By the way, they do. It's called bonuses.
Ben Waber: 32:34
Okay, we're going to get lots of people angry here. I'm not gonna say that. But
David Turetsky: 32:42
I used to work for an investment bank or investment banks doing comp. So yeah, they used to light that money on fire.
Ben Waber: 32:47
Like, where does it go? So clearly...
David Turetsky: 32:50
Or Christmas parties.
Ben Waber: 32:50
that is provably a terrible management idea. Like, burning a million dollars every year is terrible. But it wouldn't matter. It wouldn't matter. Right. And so, again, most people lack the ability to discern who knows what they're talking about, because all we do is say, that person, or that company, is successful, so we're just going to copy what they do. And what the data enables you to do is actually expose these relationships. Right? And not everyone believes it, because most people are still steeped in, you know, this way of thinking. But I think this starts this process of admitting, hey, you know what, we're successful. Some of it was probably because we made some of the right management decisions. But some of it was also because we got lucky. And if you take that approach, and you take that frame of mind forward, it means that you never just drink your own Kool-Aid and think you're, like, God's gift to the world. You're like, well, I'm going to reevaluate, and I'm going to reassess the things that I do continuously, right? I'm gonna say, does that actually work? Because I don't know. Right? And even if I then get the data that validates it, every so often I'm gonna check that again. Because I'm probably right, but I could have gotten lucky again. Right? And it's just much more honest. It's just a much more honest way of managing. And then I can even communicate to employees, right, not pretend, oh, I know this new process or this new pay structure is the best. We don't know that. It's a hypothesis. Here's why. Right? And reasonable people, you're never gonna make 100% of people happy, right, but reasonable people see that: okay, I get it. And then you move forward. And I think that's where this tension has come from, really, in corporate America today, especially, where you have executives saying, well, you know, we need to be in the office five days a week, because we know that makes us more productive. And employees are like, I don't believe you. Whereas in the past, they did believe them.
David Turetsky: 34:36
Right. Yeah. But in the past, they didn't have the power to say, I don't believe you. In the past, they said yes, yes sir, or ma'am, and then just did it. Whereas now there's been a shift in thinking, and now people can actually use their voices to say something. Now, I want to just challenge one thing. I don't think that what you're saying is wrong in any way. But I think it's always been part of the scientific method to do something and then measure it to see if it was effective. And I'll take a little bit different tack on it and maybe attack our institutions, the MBAs, who basically said: study how successful companies have done what they do, and you'll be successful. And I think that's horseshit again. Yes, a technical version of that. Because if it works for you, that's great. But do it, measure it, see if it's successful. Don't just buy it hook, line, and sinker and say the GE way is the way we should be doing it because GE has always been successful. No, you don't. I mean, Tesla had a good product and made it successful. I don't think anybody would necessarily say it was just because of Elon Musk; it was because they engineered a brilliant product.
Ben Waber: 35:50
Some people would say that. Some people would say that. So what I will say in defense of, like, you know, business schools, MBAs of the past, is that a couple decades ago especially, you couldn't measure this stuff fast enough for it to make a difference. Like, you couldn't. So, like, if you do a reorg, right, to change how people work, without, you know, the kind of data that I've been looking at, how would you do that? Well, I'd have to ask you on surveys, basically. Right. But it takes a long time for these sorts of things. There's, again, there's other things I could look at, but you don't have nearly the frequency. I mean, again, I'm simplifying it, but the speed of data collection was such that you couldn't get enough observations with enough accuracy to figure that out in a time frame where I could test this in, like, a six-month period. Right? That would be my argument.
Dwight Brown: 36:46
Right. I mean, look at it this way: it gets to what we've been talking about, too, and that is that the value is in the context around things. And, you know, to say that doing things the GE way is going to make you successful? Well, you've got to look at the context around it. Maybe it will, maybe it won't. Maybe some parts will, maybe they won't. And, you know, that's something that I'm really hoping for from AI, especially in the people space: that we can start to generate algorithms that can pull in that context, yes, and can better inform, will this work for you? Or is there, you know, a good chance this will work for you?
Ben Waber: 37:27
It takes a lot of data to do that, though. I mean, at least in my work, I've been very cautious. It wasn't until, like, literally a little bit over a year ago that we felt confident releasing benchmarks on certain things, because I just wanted to make sure that we had enough data. But I can't even do it for everything. Just to be clear, I don't even have enough data on single industries to show certain things. I have data from industries, but I'm not confident enough yet. Also, like, what's the impact if you do some pretty good training program? What is the impact of that on how people work? You need many repetitions! And David, I think you sort of brought this up as well, right? A general rule of thumb that I tend to like is: I want to see, like, a couple of dozen repetitions of something before I feel pretty good about providing this as an effect. The reason is that if it's a low effect size, if I need a million instances to see it, I don't care. Like, I just don't care. The effect size is too small; who cares. But if it's, like, a couple dozen, and you see an effect, you're like, alright, well, you're not gonna see an effect unless it's probably pretty strong. And you still want to keep validating, but you feel pretty good about it. So you need that, right? I need a couple dozen. And so, like, that's what I'm looking for. But I think, in a similar way, if you say, well, how should I pay, you know, a certain class of worker? And what does that do for retention? It's like, alright, I've got to see that for that job title a couple dozen times, right? Until you see that, you've got to be careful. Gotta be really careful.
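As a rough, hypothetical sketch of that rule of thumb in code: compute a standardized effect size (Cohen's d here) and only report it once you have a couple dozen repetitions and the effect is big enough to matter. The thresholds and the made-up before/after numbers are purely illustrative.

```python
import math
from statistics import mean, stdev

MIN_REPETITIONS = 24   # "a couple dozen" before reporting, per the rule of thumb
MIN_EFFECT = 0.5       # below this, the effect is too small to act on (illustrative)

def report_effect(before, after):
    """Cohen's d with gates on sample size and effect magnitude."""
    if min(len(before), len(after)) < MIN_REPETITIONS:
        return "not enough repetitions yet"
    pooled_sd = math.sqrt((stdev(before) ** 2 + stdev(after) ** 2) / 2)
    d = (mean(after) - mean(before)) / pooled_sd
    if abs(d) < MIN_EFFECT:
        return f"effect too small to act on (d = {d:.2f})"
    return f"meaningful effect (d = {d:.2f}); keep re-validating over time"

# Made-up outcome scores before and after some intervention (25 repetitions each).
before = [6.1, 5.8, 6.4, 6.0, 5.9] * 5
after = [6.9, 7.1, 6.8, 7.2, 7.0] * 5
print(report_effect(before, after))
```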
David Turetsky: 38:50
But the world isn't perfect. There are economic business cycles. So even if you had two organizations that are literally exactly the same in every single way, carbon copies of each other, but one does it this year and one did it in 2020, they're gonna have a different effect because of the business cycle when it happened. So there's a seasonality to this. And the topic that we're talking about here is expertise, right? And if there's a person who's trying to interpret and apply this context that Dwight was talking about, which I appreciate, to the moment, to the industry, to the type of job, to the problem that's being asked, it's that person's expertise that gives them the ability to apply the learnings that you have created and the benchmarks that you may have been unwilling to part with but could learn from. It's that expertise that gives them the context! So Ben, I think we can all agree we had a spirited conversation around this topic. There's a ton of great data around collaboration; we still have to figure out how it fits in terms of the way algorithms are used in people analytics. And I think one of the ways in which we figured it out was by using expertise and people who understand how to apply that. What else did you want to talk about? Or is there anything else you wanted to add? Or was that it? Did we solve world peace right there?
Ben Waber: 40:24
Well, I think we made a good cut at it. I do think that what is important moving forward is not just for people to be aware of these issues. I think that for organizations, for governments, I think regulation is going to be very important here. But I also think that, given the power of these analytics and given the influence they have, you know, things like ethics committees with external members, you know, that publish things publicly, should be there. I also think that professional ethics needs to be a much bigger part of the people space in general: that when someone does something wrong, right, and violates a lot of these things, then they shouldn't work in this area again. And you see this in medicine, right? Where, like, if someone uses CRISPR on human embryos, they never publish again; you can't do research anymore. And I think there are similar things here, where if you build some algorithm that does some horrible things, you know, and violates a lot of these things we're talking about, right, it just, you know, blindly fires people, fails super basic bias metrics, right, there's no human involved, then you shouldn't be in this field. Right. And I think that we haven't done a good enough job yet of really defining what those boundaries are. But I think it's super critical right now, because this field is starting to grow very, very quickly, and us setting those guardrails now matters. Things will still happen, and those should be regulated, absolutely, but I think we hopefully, you know, minimize a lot of those harms moving forward.
David Turetsky: 41:51
I think the body of work will mature to the extent at which those things should naturally happen. And hopefully they do, and the community recognizes that. Ben, it's been a pleasure having you. Thank you very much for being here.
Ben Waber: 42:05
Thank you so much for having me.
David Turetsky: 42:07
We'll have to have you back for another conversation.
Ben Waber: 42:09
It'll be great.
David Turetsky: 42:10
Dwight, thank you very much.
Dwight Brown: 42:12
Yeah. Thanks for being here with us, Ben. This has been a great conversation.
David Turetsky: 42:16
And thank you all for listening. Take care and stay safe.
Announcer: 42:20
That was the HR Data Labs podcast. If you liked the episode, please subscribe. And if you know anyone that might like to hear it, please send it their way. Thank you for joining us this week, and stay tuned for our next episode. Stay safe.
In this show we cover topics on Analytics, HR Processes, and Rewards with a focus on getting answers that organizations need by demystifying People Analytics.