Roland Fryer
How can values create value? On this podcast, Michael Eisenberg talks with business leaders and venture capitalists to explore the values and purpose behind their businesses, the impact technology can have on humanity, and the humanity behind digitization.
On this episode of Invested, Michael hosts Roland Fryer, a Professor of Economics at Harvard University and a co-founder of Equal Opportunity Ventures, an early-stage seed fund, and several start-ups.
Roland sits on several boards with an emphasis on business analytics, talent management, and product strategy. His research combines economic theory, big data, AI, and randomized social experiments to help design policies that can increase social mobility in America. Roland was awarded a MacArthur "Genius" Fellowship, the John Bates Clark Medal (given by the American Economic Association to the best American economist under age 40), and the Presidential Early Career Award for Scientists and Engineers, among others. He was a member of the 2009 Time 100 (most influential people in the world) and has been twice featured in Esquire's "Genius" issue. Roland earned a Bachelor of Arts degree in economics from the University of Texas at Arlington and a Doctor of Philosophy in Economics from Penn State University.
Please rate this episode 5 stars wherever you stream your podcasts!
[00:00:00] Michael Eisenberg:
Welcome back to another episode of Invested, where we explore the intersection of values that create value. I'm thrilled to have with me my friend Roland Fryer. I didn't know Roland actually four or five years ago. But since then, we actually became fast friends. I would often refer to him to others as a true brother.
We don't look alike. We don't come from the same background at all, but he can literally be my brother at heart. And, actually, why don't we start with that Roland? Where did we meet? You want to tell everyone about it?
Roland Fryer:
Yeah, we met in Tel Aviv, on a rooftop. I came and had coffee with you because I heard you on a panel the day before, and you were talking about the limits of impact investing.
And you know, this is not really my style, but I waited for the long line to dissipate after people wanted to chat with you. And I came up and I said, “I think you got this investing thing wrong, this impact investing thing wrong.” And you were like, “Oh, really?” And you were nice enough to ignore my ignorance, and invite me for coffee the next day.
And as you said, we really hit it off, and then we had a long and wide-ranging discussion of impact investing, how to make actual progress using capitalism, and we've been friends ever since.
[00:01:24] Michael Eisenberg:
Yeah, so for those who don't know, Roland is a Professor of Economics at Harvard University.
At age 30, he became the second youngest professor and the youngest African American ever to be awarded tenure at Harvard. He's an early-stage investor and Founder of Equal Opportunity Ventures. He was awarded a MacArthur Genius Fellowship and the John Bates Clark Medal given by the American Economic Association to the best American economist under age 40.
I think Roland is over 40 now, and he's still the most interesting and insightful American economist out there. He's also a senior fellow at the Manhattan Institute, and I can say an all-around great guy with an interesting story. And actually, I want to start with a story, Roland, which I know, but I think everyone needs to hear it because I think it tells a lot about who you are.
Can you tell us the story of how you persuaded your wife to go out with you? Your now wife.
[00:02:16] Roland Fryer:
That is not the story people typically want to hear, but I'm happy to tell you that one. We had a fellowship together. She's three years younger than me. After graduate school, I went to the Harvard Society of Fellows, which is Harvard within Harvard.
That's the most obnoxious thing you've ever heard of. But at that point, it was a good thing to have on one's resume. So I did it, and I was in my third year of the fellowship and really ready to leave. And in walks this wonderful, smart, beautiful woman, her first year of her fellowship. And she was so popular amongst the other fellows that I didn't get a chance to talk to her until April, you know, I met her in September across the room.
And so we were sitting in line for the restroom at one of the dinners. And I asked her what she did. She says, “Well, I'm a cancer researcher.” I said, “Huh, did you know, are you familiar with the work that shows that smoking actually reduces the chances that you'll get cancer?”
And she said, “That's the dumbest thing I've ever heard. It's just impossible.” I said, “Well, if I'm able to prove to you that smoking actually reduces cancer, how about we go out and have dinner?” And she said, “Sure, it'll never happen.” So the next day, early in the morning, I sent her a report that indeed showed that smoking reduced cancer.
And she says, “You know, this is from the tobacco lobby, right?” And I said, “We didn't talk about the quality of the evidence.” So she, with sympathy, agreed to have dinner with me anyway. And we went out one night and moved in together that night. And that was 18 years ago.
[00:03:58] Michael Eisenberg:
Unbelievable story. Thankfully, the quality of your regular research is much better than the quality of the American Tobacco Institute’s research. What made you think of that? ‘Cause I think this is actually instructive about who you are as a person. Like, you know, it was a bold comment to make to somebody you barely knew, to say, “Go on a date with me.”
And you use guile, for lack of a better term, and research to persuade her to go on a date with you. So like, what, how'd you think of that?
[00:04:31] Roland Fryer:
Well, I was teaching, or, I had taught an applied microeconomics course with a heavy dose of econometrics in it. And one of the examples I gave for a particular statistical methodology, and how it could go wrong, was this paper by the tobacco lobby.
So I knew it was deeply flawed. But you know, if you do enough of this stuff, Michael, you have a bunch of cocktail humor and random facts. And so it just so happened that this one came in handy. And as you said, we've been married for a while now, have two wonderful daughters.
So it was a good use of fast statistics.
[00:05:13] Michael Eisenberg:
Jeff Bezos is fond of saying that if the anecdotes contradict the data, check the data. You're probably looking at the wrong data, he says. Do you agree with that?
[00:05:30] Roland Fryer:
I don't disagree with that, but I'm constantly checking the data, right? So that wouldn't be the only thing I would check the data for. But after you check the data, if you think that it's comprehensive and robust, cleaned, then I wouldn't worry about the anecdotes at all. You know my view, the plural form of the word anecdote is not data. But, we have to be humble about what data can tell us sometimes and the quality of it.
And so oftentimes I will use that more qualitative approach, anecdotes, you know, things I see with my own eyes, to help me understand, and for lack of better words, feature engineer. So there's data, and then there's data. There are ways of constructing variables that will help guide one's intuition.
So I think anecdotes are really important for hypothesis generation. They're not helpful for hypothesis proving or hypothesis dismissing. And so there is a nugget of truth in that. But at the end of the day, the data wins out. But let's put it through the wringer. Let's be thoughtful about it and be careful, and anecdotes can help guide that discussion.
[00:06:56] Michael Eisenberg:
Let me tell you the reason I asked you the question. I have an observation about Roland that goes back to a story about you that maybe I'll let you tell, which is that sometime, I think in 2016, you published a working paper and your colleague said to you, “Don't publish it.”
You want to tell the story? Because I want to make my observation afterwards about something I think is super unique about you.
[00:07:22] Roland Fryer:
It's an interesting example. So I didn't know you were going there, but in that case, we got the anecdotes wrong, right? Or rather, we didn't see all the anecdotes. What you're referring to is, in 2016, like the rest of America, and maybe much of the world, we were looking on and watching American cities be engulfed by riots. We had seen videos that were deeply disturbing. Deeply. You mentioned values–what we saw was not a part of American values.
And I want to be clear. I went into this research thinking that I was going to be another instrument for those people who were out there protesting, but protesting is not my thing. In economics, we talk a lot about comparative advantage, right? And protesting is not my comparative advantage, but data nerd-dom is.
And so I thought, this is going to be the easiest thing I've ever done in my career, right? Because up until that point, I had studied incentives for kids and other things that were not so popular amongst a lot of people. And I thought, here's my shot to do something that people might actually like.
And it'll be the truth. That'll be great. We'll get a twofer here. And so I thought I was going to download some data on the police, show that they were really biased, inject that into a time in American history that was important, add data analysis to the emotions, and lo and behold, that was going to be helpful and maybe even help the country heal a little bit. That's arrogant, but that's what I thought at the time.
I actually started looking at the data, and we collected millions and millions of data points. And–
Michael Eisenberg:
Data specifically on what?
Roland Fryer:
Sure. On police use of force. We collected data on uses of force that ranged from just putting your hands on someone during a traffic stop, to hitting them with a baton, to a fatal shooting and everything in between.
One data set, Michael, was so detailed that it documented when the police raised their voice at a civilian. Those types of uses of force. Okay. So when we actually looked at the analysis, what we found was that on lower-level uses of force, the hands, and pushing people up against the car, there are large racial differences that cannot be explained.
And in fact, when we put them through a formal bias test, there is bias in the lower level uses of force. But that's not what people were protesting about. People were protesting about the lethal forms and lethal uses of force. And on those, we collected data from 10, 15 cities around the country. And for that, we found absolutely no racial difference whatsoever.
And that is a case, as you described, where we had seen a handful of videos that were really disturbing. And it was really discombobulating for a lot of people to see those videos, and then to read this analysis saying that there was no bias in police shootings. How could there not be? I see it with my own eyes.
And the reason I say those anecdotes were misleading is because, you know, I don't know the percentages off the top of my head, but roughly 50 percent of police shootings in America are of white suspects. And we hadn't really seen many, if any, videos of whites in those types of situations. So that's a situation where anecdotes truly are just that. They are not representative of what's actually going on in the data.
And so when we looked at the data, that's what we found. And you know, it caused a bit of an uproar, to say it mildly. But that's a case in which not only did the anecdotes go a different way than I expected, but my own personal views and personal experience were exactly the opposite of what the data showed. My arrogance at the beginning, of thinking, “Oh, this is a situation where I'm going to put out some research that people actually like,” was met with the exact opposite.
[00:11:53] Michael Eisenberg:
But I would take a step deeper. How did you collect the data? How did you verify it?
[00:12:00] Roland Fryer:
Sure. A few ways. We collected multiple data sets. Some of them were public records that you could FOIA from police departments. So at that point I had what I called a FOIA machine in my lab. I had people just writing FOIA, which is Freedom of Information Act requests to police departments, to get data that they had to share with the public.
And then there were relationships: I just went out and talked to police departments, told them what I was doing, asked them for their data, did ride-alongs, and tried to understand their perspective as well. And so we had partnerships with large urban police departments like Houston, Texas, where we got really, really detailed data that, you know, folks like The Guardian or Vice or other news outlets did not have access to.
And that's important because, like with all these things, context matters. And so it's not enough to just look at summary statistics and say, this group is more likely than that group to be involved in an officer-involved shooting. What you're actually trying to estimate is: at the time of an officer-involved shooting, holding everything else constant except race, is the probability that the police officer pulls the trigger any different?
That requires a lot of data. Okay. And at that time, no one had that type of detail in these types of data sets. And so that was the real innovation, as it were, of this analysis.
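To make that concrete, here is a minimal sketch of the kind of conditional analysis being described, written with entirely hypothetical column names for an encounter-level data set. It illustrates the general approach of comparing a raw racial gap to a gap estimated while holding encounter characteristics constant; it is not the actual code or the actual variables from the 2016 paper.

```python
# Illustrative sketch only: hypothetical encounter-level data, one row per
# police-civilian interaction. Column and file names are invented here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("encounters.csv")

# Raw gap: average shooting rate by race, with no controls at all.
raw_gap = df.groupby("black")["shot"].mean()
print(raw_gap)

# Conditional estimate: does race still predict a shooting once the
# circumstances of the encounter are held constant?
model = smf.logit(
    "shot ~ black + armed + attacked_officer + time_of_day + C(precinct)",
    data=df,
).fit()
print(model.summary())  # the coefficient on `black` is the quantity of interest
```

The only point of the sketch is that the raw difference and the conditional difference can tell very different stories, which is why the detailed contextual data mattered.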
[00:13:37] Michael Eisenberg:
So I want to pick out two words you use, because I think this is my thesis on Roland, and I want you to tell me if I'm incorrect.
So you talked about how the anecdotes are out there, like what I would call on social media. And one of the things that I've observed is, it used to be that maybe eight people were killed a day in New York City out of, you know, eight million people, which is a tiny fraction, and nobody knew about it. But today, if eight people are killed in New York City, eight million people know about it, because it's all over social media.
So the social-media driven anecdotes, as it were, kind of take over the narrative of what's real. The two words you use, I think, that are critical, are ‘ride along.’ Which is, the thing that really struck me after some of our conversations, was you actually did the legwork.
You went out to police departments. You didn't just rely on the FOIA, the Freedom of Information Act, and download the data. You did the ride-alongs to collect the anecdotes that framed the data, and collected essentially more data in an interpersonal way. And in the kind of way I would frame Roland Fryer–I have gotten to know you over a bunch of years–is there's data Roland Fryer, and then there's hard work, and legwork, and personal anecdote Roland Fryer, where you actually go see it yourself, which contextualizes your insights, and I think makes them controversial. Because very few people actually go out and do the legwork, and instead kind of use the data to get us the anecdotes.
Is that a fair characterization?
[00:14:52] Roland Fryer:
I think it is. The latter part, the legwork, is what I really pride myself on. Anyone can download data. And so, whether it be school reform, where we went out and actually rolled up our sleeves and tried to take on schools, or the police, where I embedded myself in police departments for, you know, days at a time.
But I want to be crystal clear. It's because I am–it is hypothesis generation, right, that I want to understand from the police perspective, what they believe they're maximizing, what they believe they're doing. And then you take that information, you go back to the data, and you form a set of statistical hypotheses that can actually be tested.
That's the key. And so for me, this is an unrelenting hunt for what the data has to say about a particular social issue. And my views don't matter. Michael's views don't matter–except to form hypotheses that can be statistically tested, right? And for me, and I've had this view since I got onto the Harvard faculty at, whatever, 25, 26 years old: I'm willing to put everything on the table, and I don't understand people who are not.
What are you afraid of? I’m really just trying to understand where the truth lies, with a capital T. Now, some will be seen as impossible. Fine, we can have that philosophical debate later. Happy to have it. But I don't really see any other way, because my life experiences are different than yours or different from someone else's.
And so I don't like situations where the best argument wins. I like situations where the clearest and most compelling data, and the most rigorous analysis guides our beliefs.
Michael Eisenberg:
Where does that come from? Where does your–
Roland Fryer:
I think it comes from my grandmother. I don't know if I've ever shared this with you. My grandmother, you know, Southern parenting in the U.S. isn't known for its innovation. But my grandmother had a rule, and her rule was, ‘If you just tell me the truth, you'll never get in trouble.’ So that was my guiding principle. And, so I told her the truth, even when I really, even when I was embarrassed to do so, and she always lived up to it.
You know, kind of spankings and all that kind of stuff for Southern kids–and I got my share of them, but never from my grandmother (and she was known for that with her kids). But she made a pact with me, ‘If you tell the truth, you'll never be in trouble.’ And that's been my view scientifically.
And I wish she were still with us, because I could tell her that not a lot of people have her beliefs, because sometimes when I tell the truth, I get in a lot of trouble.
[00:18:52] Michael Eisenberg:
Why do people, and maybe this is unfair, but especially in academia–because you took a lot of heat for some of your stances–have such a hard time with the truth?
Roland Fryer:
I don't know. We can hypothesize, but I don't pretend to understand them, and I'm not sure I want to do the mental exercise it would require to try to understand what that's all about. I have asked before, and the kinds of things I've been told are, well, people know what the truth is.
And so if the data comes in the opposite direction, the data must be bad. It gets back to your anecdote-versus-check-the-data comment. And I'm not quite as dumb as I look. I understand data has problems and it's not always clean, and one has to think about those things. But that's something we can oftentimes try to measure.
And so we can try to put bounds on what the data tells us, but to just ignore it, I don't understand. And you know, look, we're both capitalists. We also understand that people oftentimes have a vested interest in a particular narrative or a particular story. So I think all of those reasons are behind it. People aren't ignoring the truth; it's just inconvenient for them.
[00:19:37] Michael Eisenberg:
Do you have a view on data and truth in the era of AI right now? Like, how do you pursue truth in the era of AI and deep fakes, and…?
[00:19:52] Roland Fryer:
Well, it’s both good and bad. AI is going to supercharge those who want to do research in the right way. And oftentimes resources go to the people who do the most popular research, the research that makes everyone who holds the purse strings feel good.
But the cost of doing research has gone way down. The research I just described to you from 2016, that took a lot of capital. I mean, we had eight full-time RAs running around. There are not a lot of faculty who have budgets for eight full-time RAs, right? In fact, the provost at Harvard at the time, I met with him right after I had done that paper.
And he said, “Well, I saw in a footnote that you had eight full-time RAs do the analysis, but then you were so surprised by the results that you hired another eight to have them redo the analysis.” I said, “Yes.” You know, I thought he was complimenting me. I said, “Wow, yes, I actually did. I'm glad you saw that footnote and see how rigorous I am.”
And he says, “Yeah, that's one approach. But the other view of this is you have too many resources.” But now with AI, there's a lot more you can do. One individual faculty member can use the tools of these foundational models to write code a lot faster and produce research a lot quicker. So that's the good side. But the bad side is what you mentioned, which is the deep fakes. How do you know the data is real? And that's the thing we're going to have to figure out.
How do you actually understand and check the quality of data in the age of AI? How do we know it's actually real? What's the real source of truth? Is there going to be something to compare it to, so that we understand whether this data set is representative or not?
Those are the things that are on the frontier that we have to think about. But I'm hoping that the net result of this is, yes, deep fakes are a problem, but that it will be positive for society. Because, you know, just like the foundational tools that came with the startup ecosystem lowered the cost for people to start businesses, this is lowering the cost for people to do actual research and generate insights. And so hopefully the truth gets out there, net, in a larger way.
[00:22:29] Michael Eisenberg:
You mentioned earlier, and I think you made a big point of this when Bari Weiss interviewed you down at the University of Austin, which I'll come back to later, that you choose data and research, being a data nerd, over protesting.
We've had a lot of protests on college campuses over the last, you know, nine months or so. Do you think data causes more change than protesting, especially when a lot of people seem to pay attention to data only when it fits their narratives?
[00:23:04] Roland Fryer:
I think in the long run it does. Perhaps that's me being optimistic.
[00:23:09] Michael Eisenberg: God bless the optimists, man.
[00:23:09] Roland Fryer:
I have no choice. You can't do this work if you're not. I think in the long run it does. You've heard the quote that, you know, the arc of history is long, but it bends towards truth.
And I really do believe that. And so, I've proven that, you've proven that, we're in this for the long game. I have people now referring to work I did 20 years ago. And so, I believe you put it out there, and yes, in the moment, it may not be politically feasible for people to implement things because of protests or other current political realities.
But in the end, I do believe that as the base of knowledge changes, that it becomes harder and harder to ignore it. Not always. Your viewers are going to say, “Oh, here's five examples.” I got it. But, that is my fundamental core belief about lots of social issues, whether it's healthcare, education, etc.
You know, there are oftentimes moments where the political reality and the underlying data come together. And in those moments, we can do something special and make real progress.
[00:24:41] Michael Eisenberg:
So, how long have you been teaching at Harvard?
[00:24:45] Roland Fryer:
21 years? Something like that. I showed up on campus in 2003.
Michael Eisenberg:
Do you see a difference in the students today?
Roland Fryer:
Yes. For sure. Big differences. Big differences.
Michael Eisenberg:
I'd love to hear what they are.
Roland Fryer:
You know, here's a shining example. I think the students are really, really credentialed now, relative to 2003 or 2004. Yes, of course, they always had good resumes, but they are phenomenal when they come in. The downside of that is that they take less risk.
And so I don't meet a lot of students who have failed at anything. And I believe risk-taking and failing provide certain character traits that are good for the long run. And I think that is one of the big differences I've seen over the last 20 years: the risk-taking, the failure, the wearing of failure on your sleeve like a badge of honor, just doesn't happen that much anymore. You know, my oldest daughter competed in the Math Olympiad last year, and she didn't do that well, and she was embarrassed to tell me, I think.
Because I didn't really know she was competing. Neither did her mom, it's not just me being out of it. At any rate, at some point it came up over dinner. “Dad, I competed in the Math Olympiad.” “Okay, how'd you do?” “Not that great. You know, I did okay, but I didn't crush it.” I said, “Eleanor, did you fail at the Math Olympiad?”
She looked down, she said, “Well, I guess you could kind of put it that way.” And I leapt up from the table and I said, “Yes!” And I got on my phone, and I had a huge chocolate cake delivered to the house as quickly as I could get it. And we celebrated and we had a dance party.
And I think she was thinking, what is wrong with this family? And I said, “Failure is good! This is great! That means you're stretching yourself. That means you're pushing past your limits. I mean, you're learning about yourself.” And that is the thing that I have seen less and less of over the years. I find it sad because I fail 20 times a day, and that's just the way I've orchestrated my life. I'm always trying to push further than probably I'm supposed to.
[00:27:24] Michael Eisenberg:
You came from a rougher background than just failing the Math Olympiad, I think though, right?
[00:27:30] Roland Fryer:
Yeah, I didn't know what the Math Olympiad was until my daughter was in it. That's fair to say.
[00:27:38] Michael Eisenberg:
Did anyone celebrate your failures when you were a kid?
[00:27:41] Roland Fryer:
No one celebrated the failures or the wins. You know, there was just not a lot of celebrating going on. You know, we–so the short answer is no. And it's interesting because depending on the type of failure, when I was a kid, it could be really catastrophic.
[00:28:04] Michael Eisenberg:
Yeah, for those who don't know, I think in Freakonomics they wrote that you were a “full-fledged gangster by your teens.” And then you became a footballer.
Roland Fryer:
Yeah, that's not true.
Michael Eisenberg:
Oh, that's not true?
[00:28:14] Roland Fryer:
Yeah, I'm not a full-fledged gangster. I mean, I don't want to scare my Harvard colleagues over here, I've already got enough troubles.
[00:28:21] Michael Eisenberg:
Yeah. Well, listen, you were an athletic star, right, playing football and basketball at the U of T?
[00:28:28] Roland Fryer:
Yeah, I was an athlete from five years old, loved that. That I excelled at, and I enjoyed it.
Michael Eisenberg:
And you flipped burgers also, right?
Roland Fryer:
Oh yeah, a lot of burgers. I flipped burgers.
I had every job you could imagine, Michael. I mean, my first job at age 12 was stripping furniture. You know, I would sand it down and get it ready to be reupholstered. I think I got paid $2 an hour, which was half of minimum wage at the time. That's probably generous, ‘cause he didn't really pay me some days. But I did that. That was at 12, and then once I got to be 14, I believe, I forged my birth certificate so I could work at McDonald's.
I had to wait till then because, on my birth certificate, the doctor signs, you know, whenever 1977, June 4th. And I needed to take the cursive four over. So it's a whole thing, but I did that, and I worked at McDonald's, and Pizza Hut. The worst job I ever had was waiter at a buffet. Oh, what a bad job, right?
Because you make half of minimum wage, because you’re wait staff, but no one tips you because it's a buffet. So I worked at the Golden Corral, man. That's a bad job. I've had every odd job you can imagine. But you know, it was because I wanted, I used to joke with people that I had real customer service needs. I was not convinced that being poor was a cool idea, and I wanted more.
Michael Eisenberg:
Is Eleanor going to flip burgers?
Roland Fryer:
I hope so. You know, I hope so. One of the big questions I have personally is, how do you get kids to have grit and resilience, and an internal locus of control, when they don't go through a lot of friction in life?
And maybe that's too personal for a podcast, but it is something I think about a lot. So I find myself just randomly telling my kids “No,” just to see what happens. Should I just test them?
Michael Eisenberg:
And then buy the chocolate cake for failure.
Roland Fryer:
And then buy chocolate cakes for failure. Exactly. But yeah, I hope they do. I think you learn a lot of things in those jobs, right?
[00:30:56] Michael Eisenberg:
I agree. I sliced fish and cheese, and did deliveries for years. I came home smelly.
Roland Fryer:
Exactly.
Michael Eisenberg:
My mom would say, “Go to the fumigation room over there. Go get yourself cleaned up.” And, yeah, I totally get that. I wanna go back to Harvard for a second.
Roland Fryer:
Sure.
Michael Eisenberg:
So a lot of things have changed at Harvard in the last nine months, including Claudine Gay's resignation, after her appearance in Congress and things kind of went sideways.
And then you wrote an opinion piece in the Wall Street Journal titled, “Anti-Israel Protests and the Signaling Problem,” about college administrators bending to antisemitic and anti-Israel protests, through their signaling. What caused you to write that piece?
[00:31:43] Roland Fryer:
A couple of reasons. One, as you know, I have many friends in Israel, and I was watching what was happening. And I believe deeply, deeply in my bones about fairness. When I was a kid, I couldn't even watch a movie where the person who was doing the right things didn't come out on top.
Like it would bring me to tears as a boy. Like, I mean, it's just in my bones, man. And I felt like, rather than operating from a system of core values–here is what I believe, and I'm willing to lose whatever it takes for these core values–that college campuses, their approach would look more like this [signals with hands].
And I think that is deeply against what liberal education is all about. And so, it wasn't just what happened nine months ago that caused me to write that. It was over the last five or six years, I believe there have been opportunities to teach at institutions of higher education, and we have squandered many of those opportunities.
There've been a lot of big issues going on in the world. And rather than somehow sitting people down and having a communal discussion about these–whether or not they're consistent with our values–we have oftentimes, not all the time, but oftentimes, tried to figure out what the students wanted to hear, or that would be popular, and said those things.
And I just, I don't believe in it.
[00:33:58] Michael Eisenberg:
Why do you think that's happened in higher ed?
[00:34:04] Roland Fryer:
I don't know. You know, this might be a real surprise to you, but I don't sit around with university administrators and hang out. I don't know what they believe they're maximizing. And so I don't know the answer to that. But it is an observation of mine that, you know, if you take, for example, what happened here, as I understand it, with Ron Sullivan on our campus, I think that's a prime example. You know, Ron is a defense attorney. He's a professor at the law school. Ron had defended double murderers, because Ron believes, and many of us believe, that everyone has the right to a defense.
And then he became part of the defense team for Weinstein. And I'm not saying anything about–let's be very clear here. I shouldn't even have to say that, but I'm saying it anyway, it drives me nuts that I even have to say it–but he was on that defense team, and he was subject to some level of sanctions here at Harvard for that.
Now that, in my opinion, was an opportunity to teach, to say, “Maybe we find this person's actions absolutely disgusting, but he is entitled to a defense. So what do we believe about that? And how do we think about that? And how do you navigate that?” And to bring different faith communities together, to talk about how different faith communities might think about that. I don't know, maybe that starts to sound a little bit like leadership, but there's something there that I believe was a real teaching moment, rather than saying, “I'm on the good team. They're on the bad team. Let's polarize.” There was a real moment, and there have been other moments like that, and the one after October 7th was another moment like that, where I just believe it was an opportunity for universities–and I'm not picking on Harvard or Yale or anybody else, I'm just saying universities generally, I don't know many who got this right–to really step up. And to teach. I believe, and this is just my belief, I believe students are here for more than just math and physics. And I don't believe they're here for political indoctrination.
I believe they're here to learn how to think about complicated issues. And some of that is math and physics. And some of that is all sorts of other types of social issues. And I think we're really good at teaching math and physics, but I think on these things that are things like, is there bias in policing in America?
Somehow our rigor that we put into the math and physics all goes away, and we figure out what's comfortable to say. And I think that is a real lost opportunity.
[00:37:27] Michael Eisenberg:
So, to ask you a bit of a controversial question–is there a role for these institutions of higher education going forward?
Roland Fryer:
Of course.
Michael Eisenberg:
That's valuable? Why?
Roland Fryer:
Of course.
Michael Eisenberg:
Why? If we can't educate kids in the hard issues of our time and see all sides of it–by the way, I should point out, just a side point on Sullivan, the Talmud says that if somebody is brought in a capital case before the court, and the court unanimously decides that he is guilty, he's acquitted, and he must be released. Because if nobody can provide a defense for the person, it's obvious that this is like a cabal, or no one's thought this through properly.
Roland Fryer:
That’s interesting.
Michael Eisenberg:
And so he must be released, because nobody's defended him on the court. So unanimous opinions in capital cases under Jewish law, if he's unanimously convicted, he's actually released–which is similar to what you're talking about.
So why is there a role for higher education? Why should these institutions even exist at this point? If you can't do the kind of teaching moments you're talking about.
[00:38:29] Roland Fryer:
Because I think they create, I still believe they create value for many, and I believe there are really phenomenal professors on these campuses.
And I believe there are really hungry students on these campuses. And I believe there is a lot of good work going on. I am highlighting specific instances over the last five years where I believe universities didn't live up to their potential. But I think it's an extreme overreaction to say they have no value.
You know, they are the reason that you and I are talking today.
Michael Eisenberg:
No. The reason is Sender Cohen. He brought you to Israel, and we got a chance to meet. Let’s be very clear.
[00:39:19] Roland Fryer:
Yes. And the way that happened is, I showed up on one of these campuses and an economics professor said, I think, “You might actually be a pretty good economist. You might have a knack for this stuff. Everyone else in class looks confused, and you look delighted.”
Michael Eisenberg:
That professor, by the way, is Steven Pinker of Freakonomics, right?
Roland Fryer:
That is not true. But I think the point stands; I mean, you get what I'm saying. I think if you look at the level of discovery going on in many of our science labs, I could go on and on. Or, you know, what my wife and her colleagues are doing in cancer research is just mind-blowing. So no, let's not throw the baby out with the bathwater here, but let's fix it. Let's not go extreme, but let's fix it. And let's figure out–this is the harder problem than just throwing it away, Michael–let's figure out, how do you give faculty the incentives to be truth tellers? That's a much harder problem than just throwing it out.
[00:40:21] Michael Eisenberg:
What would you do to give faculty the incentives to be truth tellers? You're an incentives guy.
[00:40:31] Roland Fryer:
Oh, I am. I've thought a lot about this, but I don't have any great answers, because the problem is, you need a set of universities to do it. Not one individual university can really solve the problem.
Michael Eisenberg:
Game theory problem.
Roland Fryer:
Yeah, exactly. Because even if Harvard or Yale says that, and I just don't quite get tenure here on my merits, not because I said something controversial, then if the other set of universities haven't adopted that principle, I'm going to fall.
So then it's still a really risky bet. If it's a risky bet, I don't take the bet in the first place. And so I think we need a collective culture change of focusing on the quality of the research, where quality is not defined by whether I like the answer.
[00:41:23] Michael Eisenberg:
Okay, listeners, Professor Roland Fryer is soliciting proposals for how we create incentives to change higher education such that people pursue truth and bravery in the face of, let’s call it common wisdom of what the students want.
We’ll get your email at the end, Roland, so that anyone can send you their proposals. And I'm sure it'll be flooded by several.
[00:41:46] Roland Fryer:
I love it. And here's the thing though. We don't want to prompt just knee jerk, you know, shocking things just because we think–that's fun too, but what we're actually looking for is the earnest search for truth. And those are different. But if anyone has any proposals to figure that out, please do. I mean, I think it is one of the most fundamental problems in higher ed right now.
[00:42:11] Michael Eisenberg:
So before I take a sharp turn to talk about an entirely different topic, I want to tackle two more quick issues. You wrote a piece in the Wall Street Journal about DEI, which I would characterize as ‘don't throw the baby out with the bathwater’: there were important things in DEI, even though it basically failed as a policy, and we need to go out and find the talent wherever it is, because talent and intellect are pretty evenly distributed. Recently, MIT released its enrollment statistics for the first incoming class after the Supreme Court ruling that barred the use of race in admissions, and there was a dramatic decrease in acceptance or enrollment of black students.
I just want to get your reaction to that, hear how you think about it.
[00:42:55] Roland Fryer:
Completely predictable, and I want to be clear that I don't think DEI as a policy failed. I think DEI as a term failed. I don't know what the policy is, because everyone interprets it differently. It's a term, and I'm sure there'll be another term.
There was a term before DEI. And all I'm asking for, in what I think is DEI 3.0 or maybe 4.0, depending on your perspective, is to actually–you ready for this, Michael? This is going to–you asked for controversy. This is real controversial. Are you ready for this? What if it were just based on meritocracy?
Michael Eisenberg:
What if? Tell us. I’m in.
Roland Fryer:
Because if there's bias in the world, which I believe there is some, and you move from that to meritocracy, you get both more diversity and–not or–productivity. This is not trying to argue for the business case of diversity, which we can have a whole different conversation about.
This is saying, how objectionable is it to put the right people in the right jobs, doing the right things? That's all I'm asking for. That's what I was asking for in that piece. And so what do I believe about MIT's numbers, or anybody else's numbers, going down with the end of affirmative action? I mean, duh.
We all knew that was going to happen, right? Like you could look at the conditional distribution of the SAT scores before the Supreme Court decision, and you could have predicted this. I think it'll be interesting to see if they really line up with what we thought the predictions were, but yes, of course, it was going to go way down.
Of course, what I have written about is, why don't we take a more proactive approach to solving the problem? Why wait until they apply and then say, “Oops, I don't think I have enough applications. Hmm.” Why not say, “Look, let's set up academies”? The Ivy League can do this. Why don't we set up academies around the country and take students who otherwise wouldn't be in that caliber of high school, whose potential is being suppressed? That's not consistent with meritocracy either. And why don't Harvard, and MIT, and Yale and all these places run academies, or fund the running of academies, that can get people up to their standard, and then admit based on meritocratic admissions?
See, I fundamentally believe that if you start in sixth grade or ninth grade, and you give people–independent of the color of their skin–true opportunities to get to their God-given potential, then those dramatic drops that you just described won't actually be there. And my view is that people who are against this don't think that minorities can actually do it, which is the real insulting thing.
I find it insulting that you think you have to wait and have a lower bar. Why not give opportunities earlier? Developmental opportunities and let people show you the talent that they really have?
[00:46:10] Michael Eisenberg:
I'm going to come back to that in a second. I do want to say something and then ask another question. I actually want to come back to that.
The thing I want to tell you is that the 8200 unit in the Israeli military, which is kind of the cyber and technology unit, think of it as the Harvard or MIT for getting into the high-tech system here, set up a program called ‘Magshimim,’ which means ‘self-actualization.’ It goes into high schools around the country, in the social periphery, in disadvantaged neighborhoods, and sets up programs for thousands of young people with a lot of potential and not a lot of access, and then tracks them into the 8200 unit. It's been incredibly successful. I have a program like that myself here in Israel, called The Startupists. And I think, for whatever reason, because it's the military, they're able to do it in a more effective way than perhaps higher education is in the United States.
And here also, I should say, maybe because of that, you know, the main universities here have programs all over the country called “Children Love Science,” which go down into the ninth grade already and kind of pull kids up to go do that. I find it fascinating. I'm sure it exists in some way in the U.S., but the elite institutions, for whatever reason, don't go out to the neighborhoods and the communities to effectuate this. I find that difference fascinating. I'm not sure what causes it.
[00:47:31] Roland Fryer:
Yeah. I don't either. I think that, you know, they would say they go out there recruiting, but I'm not talking about recruiting.
I'm talking about developing, right? And so for me, I believe that is part of the answer: that we create these academies, and it makes sense for them to do it. Right? Look, I'm an economist. I understand public goods like nobody's business. I get that, you know, Google shouldn't go take on computer science education, because they'll spend a bunch of money, and then the computer scientists might go to Apple. That's a public good problem. But if the Ivies together took this on, then the marginal students who are helped by this are much more likely to choose the Ivy League than someplace else, right?
Because there's much more of a clear hierarchy in the higher ed system than there is in corporate America. And so I don't understand why they don't take it on. They really don't. And so that's what I believe is a policy to help the MITs and the Harvards of the world not have dismal numbers when it comes to race.
You don't just sit back and put a little bit more effort into recruiting. You actually say, you actually believe–it goes back to our core values that we talked about before–that talent is distributed uniformly, but opportunity is not. And so if you're not getting what you're supposed to, that's a market failure.
Why don't you and the other Ivies team up and go try to solve that market failure?
[00:49:12] Michael Eisenberg:
Before we make the hard turn to talk about investing, I want to ask you actually one question. I know you have a young Israeli guy, Nim, who we both know well, who works for you sometimes. And you're on the Harvard campus.
Is it safe for Jews and Israelis to be on the Harvard campus these days? And will this year be safe for them, you think?
[00:49:29] Roland Fryer:
I believe they are. But are there cases where incidents happen? I believe so. As you know, I have spent some time trying to understand and help heal what's going on on campus here with our Jewish students.
I think some of the videos that we've seen of people being harassed are awful. I think many of the reports of how the students feel, I have the same reaction to. But, you know, do I believe they're safe? I do. I think that, you know, years ago, some black students felt the same about the campus here, and I had the same comments then. I believe that in general, yes, they're safe. But could the university do more? Likely. And are there anecdotes of things that are happening that shouldn't happen? Absolutely. So yeah, I'm hoping this year is going to be a lot better. But the proof will be in the pudding.
[00:50:40] Michael Eisenberg:
So I want to make a hard turn, and say, okay, here's a successful academic. He's won every prize. He's ruffled a lot of feathers through research, and he's produced what I think are absolutely groundbreaking approaches to research as well, like using your own eyes alongside the data. As trivial as that sounds, it turns out most people don't do it.
And all of a sudden you wake up one day and go, “Okay, I'm going to be a venture capitalist.” What the hell? Like, why are you doing that?
[00:51:10] Roland Fryer:
That's not quite how it happened. I used to have coffee with a guy named Bill Hellman for 29 minutes at Crema Cafe here in Harvard Square. And Bill was, and is, fascinating to me.
And so he was helping fund some of my research, and me being the arrogant academic, I rarely ever asked him what he did. I had no idea, and he doesn't talk about himself, which is not true of typical venture capitalists. So I didn't really know what this guy did. And I didn't understand finance that well.
But we'd have coffee all the time. And the typical coffee would be, you know, 15 or 20 minutes of me saying, “Here's what I'm seeing in the world. Students are doing this. Schools are doing that. Police departments are doing this.” And it would be nine minutes of Bill saying, “You know, I met this guy who says that he can rent out people's sofas after the London Olympics. What do you think about renting people's sofas?”
And I would, you know, I was the worst venture capitalist. I would tell him quickly what I thought. “That's a terrible idea. In my neighborhood, no one rents a sofa. Who's going to rent a sofa?” I could go on and on. “What do you think about 120 characters?” “Oh,” I said, “That's the silliest idea I've ever heard.”
And so we had this incredible friendship, but then over time he started to tell me about companies that both solved things, you know, diseases like cystic fibrosis, and had, you know, multi-billion-dollar market caps. And so after that 2016 paper that you described earlier, I got really, really frustrated with the impact that academics can have, because that, by some measures, was a real academic home run. It had broken some download records, and it had a big profile in the New York Times.
And then, seven days after the paper was released, I got a confidential email from President Obama's team saying, “He needs you at the White House.” And so I went to the White House for five hours and met with him. And you know what? We got nothing done. Nothing. Not one of the 18,000 or however many police departments changed anything as a result. Zero, nothing.
Complete failure. No one bought me a chocolate cake either. You owe me a chocolate cake.
Michael Eisenberg:
President Obama owes you a chocolate cake! Good luck!
Roland Fryer:
For all those tax increases, more than chocolate cake. And so I was lamenting this fact with Bill. So it was one of these things where you go, I don't know if you do this–I went, and we were chatting, and I just laid it out on him. “Man, this is ridiculous.” And then he was telling me more and more examples over that conversation about how markets had, he believed, helped solve real problems. And I said, “Why aren't we trying that approach for poverty?” And another thing happened that really spurred me to this–my wife had this idea, which I don't even think I'm supposed to tell you, but she had this idea about early-stage cancer diagnostics. And when she published her paper in Nature, Google Ventures and other people called her up and said, “Oh, that's a great paper. I think we can make a company out of that.”
Now, granted, my wife's ideas are a lot better than mine, but no one's ever called me and said, “Wow, that's a really cool thing you did there. I bet we could build a company off of that.” Not one, zilch, zero. Another failure, another chocolate cake. And Bill and I thought, maybe there is a market here to take real social science research, which I had been doing for 18 or 20 years at that point, whatever, and instead of having the product that comes out of research be adopted by the government (my hopes and dreams) or not-for-profits, maybe there's a for-profit, market-based solution to help scale the ideas that social science has said will really increase mobility. And so that's what Equal Opportunity Ventures is all about. You know me, I don't believe in impact investing. Any time there is anything called a double bottom line or a triple bottom line, I don't believe in it; every time I say I don't believe in the bottom lines, they add more: triple, quadruple bottom lines.
So after this conversation, there might be a quintuple. I just don't believe in it. We decided to start a little fund, to see if we could invest in people and ideas where the impact was embedded in the core product, so that all they had to do was maximize revenue, a single bottom line. And that's what we've been doing.
[00:56:18] Michael Eisenberg:
I think when we first met, you heard me say together with Jeff Swartz, “Market caps close social gaps.”
[00:56:29] Roland Fryer:
I love that, yes. I've been telling people that's my phrase since then.
[00:56:31] Michael Eisenberg:
Good. Listen, in venture capital, there's no plagiarism. That's only in the highest levels of Harvard.
I want to actually talk about two of your companies, because I think they're super fascinating. One goes back to what we talked about before about DEI. So, you've been talking about using AI to actually enable meritocracy in HR. Can you talk about that for a second? Because I think this is actually–it’s not an earth-shattering idea, but it's a mind-blowing implementation.
[00:57:19] Roland Fryer:
So, after the summer of 2020 with George Floyd, there was something about the corporate statements that came out: they lacked authenticity in such a deep way. We care about that group. We care about this group. I mean, Expensify even told me who to vote for at some point.
And I just thought, this is bananas. And I called up probably the best student I've ever had here at Harvard, and I said to her, “Why aren't they using the theorems I taught you in your graduate course? You can solve these issues. There's a way to analyze talent and to build the algorithms to ensure that the people you hire and promote are most likely to be the most qualified. And maybe that's a true DEI strategy, that it's all about meritocracy.”
And she laughed at me and she says, “No one reads econ journals.” And I said, “Really?” It was news to me.
Michael Eisenberg:
Not even Google?
Roland Fryer:
Exactly. Not even Google. And so I said, “Well, why don't we substantiate those insights into actual software?”
And so that's what we've done, and we've had a phenomenal time doing it. And so we work with large enterprises, police departments, you know, restaurants and franchises to supercharge meritocracy in their hiring. And so one of the things that we can do is we can go in and run a series of proprietary tests based on my research, and the research of others, that can tell them all the ways in which they are not on their talent frontier, meaning they don't always put the right person in the right job.
And some of that will have to do with race and gender. And some of that will have to do with, you know, the allure of British accents. You just don't know. And so we can do that, and then we can say, “Hey, here is a hotspot over here where I really do believe putting AI, or ML, as part of a hiring check can really be helpful.”
So let me give you a concrete example. Food and beverage companies struggle a lot with attrition. Okay. So I talked to someone that’s from a large coffee chain last week, who said, “We've got 50 percent year over year attrition in our stores. And if we could get that to 35%, that's a game changer for our business.”
Okay. And so one of the things I've noticed is that A. there is a big disconnect between what corporate wants to do in terms of hiring and what the store managers actually do. And so to have fun, I applied to one of these places, you know, and didn't even get an interview. Couldn't get an interview.
I've applied to multiple fast food places, and can't get one. I'm serious. I can't get an interview. And the reason is, I've interviewed a bunch of store managers, and with the store managers–if you come into the store in person, at the right time, and you can work on Thursdays and they need someone on Thursdays, you're in luck. Or they'll take referrals from the current employees. But doing what we do in corporate, which is having people apply, having that go to an applicant tracking system, having a systematic way to narrow that down to a slate, doing structured interviews, and then making choices, doesn't always happen at the store level.
And so we built technology to do that first pass for them. We're chatting with a big restaurant chain that literally only interviews three percent of the people who apply, so we can use AI to interview the other 97 percent, not to make any decisions, but to surface people that we believe are highly qualified.
So here's a really simple idea. Why don't you double down, using data, on the people who have the characteristics and traits of employees who have actually worked out in the past? And, you know, sell short on the characteristics and traits that have not. And so what we can do, and what we do with our clients, is say, “Look, whenever someone applies, they go through our algorithms. We can give you an expected tenure and an expected performance based on historical data.”
[01:01:51] Michael Eisenberg:
And there is historical data.
[01:01:52] Roland Fryer:
Absolutely. And all those models are built specifically for the actual enterprise, because different characteristics and traits will work at Starbucks versus Pizza Hut versus McDonald's, okay, because of their different corporate cultures.
And so we're building all of that. And that, in my opinion, is a way to get DEI through meritocracy, because in our algorithms, race and gender, they don't come in. Okay. And so this is really about whether or not the things that you present at the time of application are correlated with performance.
And so that's what we've been doing. And it's been a blast, because we can actually work to ensure people are at their talent frontier. And I really believe that's a game-changer for folks.
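As a rough illustration of the kind of model being described, here is a minimal sketch, with hypothetical file and feature names, of predicting expected tenure for a new applicant from application-time data on an employer's past hires, deliberately leaving race and gender out of the inputs. It is a sketch of the general approach, not the company's actual software or features.

```python
# Illustrative sketch: predict expected tenure from application-time features.
# File and feature names are hypothetical; race and gender are excluded.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

past = pd.read_csv("past_hires.csv")  # one row per historical hire
features = ["prior_job_tenure_months", "schedule_flexibility",
            "referral", "distance_to_store_km"]

X_train, X_test, y_train, y_test = train_test_split(
    past[features], past["tenure_months"], test_size=0.2, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Score a new applicant: the expected tenure is used to surface candidates
# for a human interview, not to make the hiring decision itself.
applicant = pd.DataFrame([{"prior_job_tenure_months": 9,
                           "schedule_flexibility": 4,
                           "referral": 1,
                           "distance_to_store_km": 3.2}])
print("expected tenure (months):", model.predict(applicant)[0])
```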
[01:02:46] Michael Eisenberg:
One question I have for you on that is, one of the things that I've noticed about myself is when I meet people first on Zoom, or when I met them first on Zoom in the pandemic, we had a relationship kind of, and then I met them in real life and they were the much taller, much shorter, much–it felt like I was meeting a different person for the first time.
By the way, it just happened to me a week ago here. Someone I met during the pandemic, I ran into, he says, “You don't recognize me?” And I go, “I have no idea who you are.” And he told me his name, and I'm looking at him and it's like, “No, no, it can't be.” By the way, I talked to this guy five, seven times on zoom. Not like once.
And so I wonder, if you select people using the data, and then they turn up at the shop for an in-person interview, it can almost be shocking to somebody–hey, this is the guy, this is the woman–it doesn't compute.
[01:03:38] Roland Fryer:
Could be, but you know, it will help you shortlist them, right?
That's why I say that the computers aren't choosing. But I believe in the data-driven method more than I believe that we should take the random person who comes by on Thursday and can work on Friday. And so, is it perfect? Of course not, but it is the best predictive thing that we have about who's actually going to be effective.
And so look, all those people that wouldn't actually get an interview in the first place get an opportunity. And so this is a way to increase opportunity, increase meritocracy. And by doing that, I believe, you know, whether it's hospitality or food and bev or traditional enterprises, they will fundamentally change their bottom line. Because I don't care what business you're in–you're in the people business. That's one thing I've learned in early-stage venture: ideas are one thing. Ideas are king as an academic, because you don't ever have to scale them. But in the venture business–ideas are important, don't get me wrong–these are people businesses, all of them are people businesses.
And you know this better than anyone I've ever met. One of the key characteristics you're looking for–we've talked about this over dinner–is, can this person build a real team around this idea? Because if it's them and their idea alone, and I don't think they can attract true talent, it's not going to scale.
And so all of these businesses–yeah, all these businesses are people businesses, and we're trying to help folks get the right people for them. You know, HR analytics and hiring is stuck in the 1980s, if I'm being generous. It needs a fundamental re-envisioning, and that's what we're trying to do.
[01:05:29] Michael Eisenberg:
I also want to ask you about Forage, because you started Equal Opportunity Ventures–I think one of your life pursuits, not the only one, is ending poverty, and certainly in 2024, the digital divide is a big part of that–and I think there's a super unique insight with Forage that I want everyone to hear.
[01:05:47] Roland Fryer:
Yeah, I think what they're doing at Forage is phenomenal. So our big deal is increasing social mobility. And at some point, maybe we can talk about how we actually measure that, because I'm dead serious about it. And so–
[01:06:05] Michael Eisenberg:
You’re gonna tell us right now when you tell us about Forage. That's exactly where I'm headed.
[01:06:05] Roland Fryer:
So Forage is a third-party payment processing platform that allows people to use–we call them food stamps, but government food assistance–at online grocers, and they even have point of sale now, through other partnerships. But this is really, really important.
When I was a kid, we were on food stamps, and it was, frankly, embarrassing to use. I mean, you probably don't remember that, or maybe you know it, I don't know. Food stamps back then were literally like Monopoly money, right? They were these huge bills. They didn't look like dollars. And so you'd show up to the grocery store, and you'd buy things, and you'd give them your big, huge dollar bills from the government.
And then they would always come over the loudspeaker and say, “Food stamps on aisle four.” And everyone looks–it's very different now. So the CEO of Forage used to be Head of Payments at Instacart, and during the pandemic they turned that ability on. He had some debates internally about whether or not there was a real market in grocery delivery for people who were on government assistance, and he was adamant that yes, there was. And that turned out to be really, really successful for them.
And he then left to go do that for everybody. There are, you know, 200,000 online grocers, and there's a 75-page application and only six people at the USDA to approve those applications. And so this gives grocers third-party processing and enables them to take food stamps as a form of payment.
And it's really, really–I'm so proud of them. I mean, they have big deals with big grocers, and the idea is, now there should be no food deserts. Because Uber Eats and DoorDash and all these people deliver most everywhere in the U.S., certainly in the urban centers. And so he has built software that allows people to go anywhere–to all these online grocers that sell groceries or what have you–and to use their government benefits in a way that has dignity, and is effective, and allows them to get high-quality food, and what they need, from anywhere. It also obviously pushes down on prices, because you're not beholden to just the local grocery store. Now there's a big market for you to be able to find your groceries.
[01:08:55] Michael Eisenberg:
And it gives you digital connectivity, which, you know, is critical. I want you to take a minute and just tell us about the measuring of this social mobility here.
I think it's critical for people to hear. And then I'm gonna ask you one last question.
[01:09:11] Roland Fryer:
Sure. Yeah. We created this thing called the Social Money Multiplier. And so everybody's familiar with the classic money multiplier. You give me a dollar as an LP, and I give you back $5, I hope.
[01:09:20] Michael Eisenberg:
Hey, Roland's LPs, please listen in. Here he is, just saying he's giving you back $5 before the social money multiplier.
[01:09:21] Roland Fryer:
And I'm sure Michael's giving his 10. Okay. So let's put us both on the spot. The social money multiplier is, for every dollar invested, how much expected income do we give back to society? And so that is a calculation that we make when we look at an investment. Something like–let's take education.
’Cause that's an easy one. Invest in education. And I know that every standard deviation increase in test scores results in roughly a 12 to 14 percent increase in income. So if I understand this product, and understand the impact it's actually having on test scores, then I can relate that to income.
Now, this measure we do for everything we invest in, pre-investment. I want to understand what data we need on our side–without burdening the entrepreneur; there's no such thing as a double bottom line, I need them focused–to ensure that we can actually estimate this. And so I am trying for five times money on the social side as well.
And so that's going to bias me toward things like finding people jobs–companies like that. Because there's a very direct relationship between finding you a job and getting you income. It's going to bias me against vegan restaurants as a way to increase social mobility. I like vegetables too.
Michael Eisenberg:
But you buy your kids chocolate cake. But okay.
Roland Fryer:
Yeah, that's a vegetable, isn’t it? According to my children it is.
But seriously, there are so many steps between a particular vegan meal and income, right? People can say, “Okay, you're healthy, you stay at work,” but there are a lot of steps. And at each one of those steps, the effect gets diluted.
And so that's how we calculate it. And for something like Forage, we look at what happens when the purchasing power of a dollar for food actually increases, and how that affects your economic mobility. And that's what we did before that investment. We looked, and it turns out that the government formulas are very close whether you live in New York City or Tulsa, whereas the cost of a tomato in New York City versus Tulsa is very different.
So you can use the real value of the government assistance–the differences between the two–and relate that to mobility. And that's what we did, and we saw that there was a really steep relationship: the higher the real value of the food stamp, the higher the chance of social mobility. So we do those things for every investment.
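To show how the arithmetic behind the social money multiplier might work, here is a back-of-the-envelope sketch using the education example above. The 12-to-14-percent income gain per standard deviation of test scores is the figure Roland cites; every other number is an invented placeholder, and the real calculation would of course run on each company's actual data.

```python
# Back-of-the-envelope sketch of the social money multiplier:
# dollars invested -> expected extra income returned to society.
# Only the ~12-14% income gain per SD of test scores comes from the
# conversation above; all other figures are invented placeholders.

invested = 1_000_000        # hypothetical check size, in dollars
students_reached = 1_300    # hypothetical number of students the product reaches
score_gain_sd = 0.15        # hypothetical average test-score gain, in standard deviations
income_gain_per_sd = 0.13   # midpoint of the cited 12-14% income gain per SD
baseline_income = 40_000    # hypothetical baseline annual income, in dollars
years_of_effect = 5         # hypothetical horizon over which the income gain persists

extra_income_per_person_year = baseline_income * income_gain_per_sd * score_gain_sd
total_social_return = extra_income_per_person_year * students_reached * years_of_effect
social_money_multiplier = total_social_return / invested

print(f"Extra income per person per year: ${extra_income_per_person_year:,.0f}")
print(f"Social money multiplier: {social_money_multiplier:.1f}x (target: 5x)")
```

The same logic applies to something like Forage: estimate how much the effective purchasing power of a benefit dollar rises, chain that through to expected income, and check whether the result clears the 5x bar.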
And there have been investments that we thought would be really, really interesting, but we just couldn't get our heads around how it was going to actually impact social mobility. I remember one of our really young associates came in and said, “Oh, I got this scooter company that's amazing.” I said, “Scooters?” And he was like, “Isn't that literally mobility?” I was like, “That's the wrong kind of mobility.”
So we are very, very serious about this. And I prefer this measure relative to others–of course, things like ‘wealth creation’ are really, really important, but that doesn't scale in the same way, right?
And so a lot of people are out there looking at, okay, how many people of this group or that group do the companies I invest in hire? Or what percent of their executives come from a certain group? I don't like those measures. We also don't pay attention to the race and gender of the founder.
Yes, I think something like 70 percent of our founders are people of color or what have you. But that’s because they're solving problems that are familiar to those neighborhoods. And we believe founder-market fit is really important. So it just so happens that that's true, but we're not solely focused on that, right?
Like, if you're putting a cat on the internet, it's a cat on the internet. I don't care who you are or what group you're from. That's still a cat on the internet, right? We're interested in people who are solving big social problems using technology, in ways that can scale fast.
[01:13:37] Michael Eisenberg:
Perfect segue to my second-to-last question. So we talked about HR, we talked about access to food and keeping inflation down for those people and connecting them, and our mutual friend Abby, who you're invested in, who's taken on housing–another investment of yours, I think. So three amazing areas, but what is the single biggest problem you want to solve through this market mechanism?
You must have an aspiration for the biggest problem you want to solve….
And I stumped Roland Fryer. Wow!
[01:14:14] Roland Fryer:
Wow. The single, man, it’s the single part, right? Like I can give you two or three, but I think that if I had to choose one, it'd be jobs. Workforce development.
[01:14:31] Michael Eisenberg:
But you’re doing that! What other plays would you make in jobs? Or is there a new area you haven't invested in?
[01:14:35] Roland Fryer:
I think there are many areas that are around alternative training to get people into really good paying jobs. And I'm not talking about–we've seen things where people come in and say, “Well, in five hours, you can become a pilot.” No, you can't. I believe there's real training. Our investment in Stepful is an example of that.
Michael Eisenberg:
Amazing company.
Roland Fryer:
Right? Getting people from Starbucks and making them medical assistants or phlebotomists. That's a phenomenal company, but that's just one narrow area. There's got to be a lot of those–whether it's training people to install heat pumps or to be windmill techs, or whatever it is.
So I like things like that–I'm very interested in companies like that. And you told me to mention one; I did that. Look, education is great. It's hard to have scalable, big returns in education, but the impact's really large. So we look for education companies that oftentimes don't sell to schools.
So what Kaipod is doing is fascinating to me.
[01:15:40] Michael Eisenberg:
It's K A I P O D. It's super interesting. Check it out.
[01:15:43] Roland Fryer:
Absolutely. So, a few years ago, they had six kids and a pilot. And now, you know, I just talked to the CEO yesterday. He's got more than 50 schools and nearly a thousand kids, right?
Michael Eisenberg:
Amazing. He's incredible.
Roland Fryer:
Fundamental, what he's doing. And I think the other one is healthcare. I think health is really, really important. And I think this is a place where academics have actually gotten a lot of it wrong. They're focused on the wrong things here, and I think it's really about access to high-quality care and how we can use technology to ensure more people can have access.
I don't think that if you and I go to the same hospital and see the same doctor, we have very different outcomes. The problem is, we go to very different hospitals and see different doctors. So the question is, how do we solve that?
[01:16:25] Michael Eisenberg:
Okay, last question for you. Because we met in Israel, I feel like I need to ask you this question.
What has changed in you or your worldview after your trip to Israel?
[01:16:39] Roland Fryer:
Oh, man. That's a great one. I think it's understanding the power of collective grit and resilience. I've had, as you know, several trips to Israel since we met, and it has altered how I think through race relations in America–watching, you know, what has been built in Israel and what's possible.
And after my first trip–I think I may have told you this, Michael–after my first trip to Israel, after our coffee on your rooftop, as soon as I landed in America, I called my best friend from seventh grade, and he says, “I haven't heard from you. What have you been doing?”
I said, “Man, I just landed from Tel Aviv.” He says, “Oh my God, how was it?” And I said to him, “Maybe black people should have taken Texas. Maybe, oh maybe, we should have had land of our own and built from there, and done a similar experiment.” I say that half-joking, but you get the idea.
I think it's really hard to have any type of empathy or understanding sitting over here and watching CNN, or listening to social media. You gotta, like anything, whether it's the police or education or healthcare, you know my style, you got to get over there, and shake people's hands, and hug them and get to understand their stories, before you can really, truly understand, and then get some data.
And so that's what's changed for me, is perspective, and an understanding, a deeper understanding–and not always in the ways you might predict–but a deeper understanding of how complicated the issues are in the Middle East. And you know, one of my top five moments of the last four or five years is being in, I think it was the West Bank with you, where we were talking to the folks who were investing in kind of Palestinian and Israeli companies to do joint ventures–I thought that was fascinating.
And so a lot of the work that's being done over there–not just Startup Nation, but some of the collaborative work that I saw across different cultures, trying to use capital markets to help with that–I found extraordinarily inspiring, and I don't think people know enough about it. So I think a lot of those things just changed my perspective.
[01:19:25] Michael Eisenberg:
Roland, thank you for coming on the Invested podcast. You know how much I admire you and your work, and this was a real treat for me–a true treat to do this semi-live, so people can hear it. I really want to encourage anyone listening: you should actually read Roland's research.
It's something I did after I met him. I encourage you to read everything he writes, whether it's in the New York Times or the Wall Street Journal, or online. You can find him on LinkedIn, Roland Fryer, R O L A N D F R Y E R. You're just an extraordinary person. And I feel completely enriched over the last five years to have considered you a friend. I hope you consider me the same.
[01:20:02] Roland Fryer:
Oh, of course I do, Michael, and you're being way too kind. You're a legendary guy, and I really appreciate you bringing me on.
Michael Eisenberg:
All right. Thanks for coming, Roland.
Roland Fryer:
See you, Michael.
- [00:00:00] Intro
- [00:02:50] How Did You Meet Your Wife?
- [00:05:50] What if the Data and Anecdotes Don’t Match?
- [00:07:40] Roland’s Bombshell Paper on Police Brutality
- [00:14:22] The Need for Firsthand Experience
- [00:17:15] Roland’s Guiding Principle for Truth
- [00:20:22] Data And Truth in an Era of AI
- [00:23:08] Can Data Cause Change?
- [00:25:20] How Harvard Students Have Changed
- [00:28:17] Working Odd Jobs as a Kid
- [00:31:46] Speaking Out on Harvard Antisemitism
- [00:33:55] Is There Still Value in Higher Education?
- [00:40:52] How to Incentivize Faculty to be Truthtellers
- [00:42:53] The Misapplication of DEI & the Need for Meritocracy
- [00:49:50] Is Harvard Safe for Jews and Israelis?
- [00:51:20] Why Become a Venture Capitalist?
- [00:57:18] Use of AI for Hiring
- [01:05:50] Increasing Social Mobility with Forage
- [01:14:00] What is the Biggest Problem You Want to Solve?
- [01:16:40] What’s Changed in Your Worldview Since Visiting Israel?
Follow Roland on LinkedIn
Subscribe to Invested here
Learn more about Aleph
Sign up for Aleph’s monthly email newsletter
Subscribe to our YouTube channel
Follow Michael on Twitter
Follow Michael on LinkedIn
Follow Aleph on Twitter
Follow Aleph on LinkedIn
Follow Aleph on Instagram
Executive Producer: Erica Marom
Producer: Yoni Mayer
Video and Editing: Ron Baranov
Music and Art: Uri Ar
Design: Rony Karadi