Amir Efrati is Co-Founder and Executive Editor of The Information, a subscription-based publication for tech executives. Previously he was a reporter with the Wall Street Journal for nine years. Amir's reporting on the dangers of self-driving cars has earned him numerous journalism awards, and he is a frequent contributor on CNBC, PBS, BBC, Fox, and NPR.
As the Co-founder and CEO of Alation, Satyen lives his passion of empowering a curious and rational world by fundamentally improving the way data consumers, creators, and stewards find, understand, and trust data. Industry insiders call him a visionary entrepreneur. Those who meet him call him warm and down-to-earth. His kids call him "Dad."
Satyen Sangani (00:02): Innovation rarely happens out in the open. When you're developing the next big thing, you have to protect your trade secrets. But often, innovations have both positive and negative consequences. Let's take aviation as an example. When a plane crashes, there is almost always a devastating loss of life. That's why the aviation industry has strict shared safety protocols. It's why we have black boxes that give investigators insights into what happens in the moments leading up to an accident. With these protocols in place, everyone learns from tragedy, not just the party responsible.
(00:36): This tension between corporate responsibility and the public good is everywhere. When does a secret become something the public has a right to know? Is it Facebook's responsibility to warn you that your teenage daughter's Instagram account puts her at risk for depression and anxiety? Should you know when there's a self-driving vehicle in the lane in front of you? Where does our responsibility as corporations collide with our rights as citizens? And what responsibilities do companies have when there's no clear statute or law that identifies the losses related to your product?
(01:07): Today, we're going to look at these issues and more with journalist Amir Efrati. Amir is the Executive Editor and co-founder of The Information, a subscription-based technology newsletter for tech executives. His reporting on Uber's corporate missteps and on the dangers and limits of self-driving vehicle development won consecutive Best in Business awards from the Society for Advancing Business Editing and Writing.
(01:31): Previously, Amir spent nine years at the Wall Street Journal reporting on technology, health, and criminal justice. He was the first to report on Bernie Madoff's arrest and broke major news about the epic fraud. And federal judges cited his reporting on egregious criminal sentencing. His TV and radio appearances include CNBC, PBS, BBC, Fox, and NPR. Amir is one of the most knowledgeable journalists in tech today, and I can't think of anyone better to have this conversation with. So let's dig in.
Producer (02:11): Welcome to Data Radicals, a show about the people who use data to see things that nobody else can. This episode features an interview with Amir Efrati, Executive Editor and co-founder of The Information. In this episode, he and Satyen discuss the Facebook Papers, controversies surrounding self-driving cars, and the relationship between data-driven decisions and transparency. Data Radicals is brought to you by the generous support of Alation.
(02:37): Alation empowers people in large organizations to make data-driven decisions. It's like Google for enterprise data – but smarter. Hard to believe, I know, but Alation makes it easy to find, understand, use, and trust the right data for the job. Learn more about Alation at alation.com. That's alation.com.
Satyen Sangani (03:00): So Amir, a lot of folks listening to this may not be in the tech industry and therefore may not be as acquainted with The Information as I am. Can you tell us a little bit about the publication and how you decided to start it?
Amir Efrati (03:18): Yeah, we're a technology and media news publication that publishes articles that you can't read elsewhere, which means focusing on essentially proprietary information, proprietary stories or analysis that help people make better decisions, be more informed about a variety of industries. We think about it as "the Economist for tech," something like that. But with a lot of original reporting, really a focus on original reporting above all else.
(03:55): So we started in 2013, so I guess this is eight years now that we've been doing it and have grown considerably. The whole company is probably getting close to 50 people, a majority of whom are on the editorial side. We've got reporters primarily based in the Bay Area and New York, with some others scattered in places like Seattle and London, as well as a Hong Kong bureau that's very, very important to us, where we have people covering the Chinese technology scene, as well as how American companies, including Apple and Amazon, do business in China.
Satyen Sangani (04:38): Steve Jobs talked about that reality distortion field that every technology founder or visionary would have to articulate in order to get people to follow them. And of course, in the journalism profession, you have to, on some level, believe in that, but also have a skeptical eye in understanding those stories. How do you think about that balance? Because that's got to be an interesting balance to strike, where you're both trying to deconstruct things, but not tear them down.
Amir Efrati (05:12): I come from a very old school reporting background. I was at the Wall Street Journal in the pre-Rupert Murdoch era. It's still a great publication, but I really, really enjoyed my time there and the kind of upbringing that I had. And it was very traditional in the sense of if you're a reporter or journalist, you're not an activist. You're not actively rooting for one side or another. You're just trying to understand what's going on. In the course of understanding what's going on, you will inevitably reveal hypocrisies that are just human nature and inevitably reveal fraud, which is also part of human nature and you see in every industry.
(05:51): I mean, as long as there are humans running things, there will always be jobs for journalists, because there's constantly a dissonance between what is publicly known and what is privately known. And so, yeah, hopefully that explains it.
Satyen Sangani (06:05): Which leads us actually to one of the interesting stories that you've talked about recently within The Information: the Facebook Papers. That's a story where what's publicly known is different from what's privately known. And it's a story where that company had a lot of specialized information. Can you maybe just start by telling the audience a little bit about what the Facebook Papers are, if they're not acquainted with them, and how that story came to be?
Amir Efrati (06:31): By now, I'm sure many of your listeners will have heard of Frances Haugen, who was a Facebook employee working on the civic integrity unit, meaning a unit that was trying to improve the way Facebook managed discussions around elections. Things like misinformation about elections or politicians and how they were using the platform in a variety of ways, some of which are very unsavory. (07:05): And so we actually have been covering that unit and some of the research that it did, including determining that blatant misinformation – I'm not talking about opinion, but blatant misinformation by a politician – is much more likely to be believed, or at least somewhat more likely to be believed, by a layperson versus other users who are spreading blatant misinformation. And there were a lot of questions about what they should do about that and how to deal with politicians, who up until that point were essentially given a blank check to say whatever they wanted, because obviously Facebook is a platform that does prize and has prioritized freedom of speech.
(07:48): So we were covering that unit. We did not know about Frances. She became very disenchanted with what was happening across the company, as it was responding to and reacting to a variety of crises that had intersected with the platform. She eventually linked up with a Wall Street Journal reporter named Jeff Horwitz who worked with her, and she took a whole slew of photographs of all kinds of internal research and internal discussions about every topic under the sun and gave that to him. They published a series based on it, which touched on everything from this kind of blank check that I talked about, which is essentially giving a free pass to some users, including politicians, to say whatever they wanted, even if it violated Facebook's rules around violence or hate speech and things like that. It gave them a free pass to say those things versus other users who were being blocked, and nobody knew about that. That was one element which kind of followed our reporting, but they got into other areas. I think the most publicized arena was the Instagram research; there's still plenty of debate about that and what came out of the Wall Street Journal's reporting on the Instagram research, because it did show that for a sub-segment of Instagram users who were teenage girls who already felt bad about themselves, Instagram made them feel worse.
(09:16): But I think the series did make quite clear that there was not a lot of transparency around the decisions that Facebook was making about who can say what on the platform. Then the second really big area was pounding home the point that content moderation, or efforts to reduce hate speech or violence or blatant misinformation … there was just not a lot of effort happening outside of the West and outside of the United States to try to control this kind of violent or hate speech. There was just a big difference in how much effort Facebook was putting into it, whether it was in Arab countries or other Asian countries. Countries where Facebook was relied on a lot more for societal discourse, and certainly relied on more by politicians to get the word out, there was just a lot less moderation and rules and enforcement of rules. I'm really glad that there's more transparency around that.
Satyen Sangani (10:26): When events like the Capitol insurrection occur, transparency seems like it would become a necessity.
Amir Efrati (10:31): Frances Haugen, this whistleblower, felt like the Journal's series wasn't enough, so she wanted to create a consortium of publications that would get access to all the same documents and try to make sense of them or come up with stories about them. She did that. We were not part of the initial consortium. Actually, we wrote an article about the consortium before it was even a known thing. After it came out, the first set of stories were around the January 6 insurrection and to what extent Facebook could have prevented some of the groups around that insurrection from forming or planning on the platform.
(11:11): We ended up sifting through a lot of these documents to really understand how Mark Zuckerberg runs the company and how he's really the central decision-maker for just about everything, certainly when it comes to speech issues and how they try to reduce harmful speech, harmful content as Facebook defines it, as well as some of the newer frontiers, like virtual reality (VR), and the difficulties that Facebook – or Meta, I should say – will inevitably have in controlling harmful content and abuse on something like VR, where people are interacting with each other in real time, verbally … really difficult to deal with that. People are going to be people.
Satyen Sangani (11:53): I think it's super interesting, because our audience is a set of people who would like to transform their companies to be more data-driven, to understand more about their consumers and their businesses so that, ostensibly, they can influence behavior, in some ways in the exact same way that Facebook is currently doing. You've got kind of this interesting tension where, on one hand, somebody could say, "Well, look, what Facebook is doing is obviously bad. They're horrible people. They know they're making one-fifth of girls on their platform clinically depressed."
On the other hand, you've got another potential argument, which is, "Well, hell, this is capitalism – laissez-faire. Let's let them do whatever the law permits them to do." And there's this massive gray area in the middle. But then there's a whole bunch of people inside of Facebook, inside of these companies, who have got to make decisions between these two realities. What would you say to them? How do you think about advising them on doing their jobs when they have this great power, but this great responsibility that comes along with it?
Amir Efrati (13:01): You talk about the gray area. I think you can make an argument that Facebook is a net positive for most people and most users. They really enjoy it. They form groups around gems and rocks. They might even meet their significant other through that. I'm actually thinking of a very specific example in my own personal circle. That's not being talked about. There's a lot of evidence to suggest that people do like Facebook or Meta and its products, but because Meta has so often fumbled its public relations around the objectively harmful things that do sometimes occur, I am actually really glad that some of this material came out, as uncomfortable as it is for that company. I think more transparency is better, especially for a platform that's used by billions of people.
(13:54): But to your other point about analytics and research, it really, really brought home the idea that Facebook has absolutely invested a lot in trying to understand its own products and the impact they have on the world. I think the conflict is whether they act on it enough or prioritize it enough. This is the gray area that we're talking about. If you look at the Facebook Papers (and unfortunately, I don't think we're yet able to share them directly, broadly, but I'm sure at some point they'll be publicly released in full because they've gone to Congress), it really is amazing how much research the researchers did, and continue to do, on all the ways their products work. It is really, really astounding, and I think that's something that even Frances herself was talking about as something that's very encouraging, because at least they were looking into these things.
Satyen Sangani (14:51): Data gives us the power to influence human behavior in an incredibly targeted way. It's a concept historian and public intellectual Yuval Harari has dubbed "hacking humans." But how much influence is too much, and what is the responsibility of organizations to recognize that influence?
Amir Efrati (15:10): This is a never-ending conversation, and I think at the end of the day, it starts with: how are we educated, and how do our brains develop today? Are we teaching enough people to think critically or to listen to a variety of viewpoints? Then you can certainly get into discussions around how we are arbitrarily and unnaturally being divided, when just in the United States, the wants and desires of somebody who defines themselves as liberal and the wants and desires of somebody who defines themselves as conservative or Republican are not that different, but we are totally divided and being taken advantage of. That is certainly a depressing aspect of our society. It really doesn't need to be that way. People do not realize that they are not actually that different. But certainly when it comes to these online platforms, the worst of ourselves comes out. People are very quick to jump at each other, to view each other as two-dimensional. And that's not Facebook's fault: that is human nature, to your point.
Satyen Sangani (16:12): And then on top of that you've got this other issue, which is influencing people's thoughts and behavior. And in many cases, for the people this podcast is for, that's their job: they're marketers. And so, they want to try to influence people's behaviors. So, one, there is this aspect of an unnatural influence and a not-quite-human interaction. And two, there is this aspect of, well, how much influence is too much influence, and what is the limit, and how do I even determine where the line is, and who tells me where the line is? Because there's no law against this stuff. Those are questions that I think all of us have to struggle with in our jobs because, on some level, we're all trying to change people's minds.
Amir Efrati (16:55): This is one reason why I like the office. Human interaction, face-to-face interaction, it's everything. You're never going to convince me that not being together doesn't cause problems that otherwise wouldn't happen.
Satyen Sangani (17:12): Social media has changed our lives. Self-driving car makers claim their technology will do the same, but how much do we really know about it? Amir has been covering this section of the automotive industry since its early days. And like Facebook, it's something we don't know nearly enough about.
Amir Efrati (17:27): One of the reasons I got interested in self-driving car development is because Google was doing it, so I actually got a ride in one of their prototypes with Chris Urmson, who was there at the time, back in the day when they were saying this is going to be a solved problem by the time my kid is 16 and has a license to drive. They won't need to drive. Because it did look tantalizingly close, and it was a very logical assumption to make, because you're like, "Okay, you put a bunch of sensors on a car. Those sensors can see way better than humans can, obviously, so it stands to reason that the cars can be safer." Unfortunately, that did not turn out to be the case. As I started to follow it very closely, it became very clear, very early on, even in 2017, that the claims the developers were making just didn't comport with reality.
(18:18): They were saying we don't have to take over very often, and our cars are nearly perfect. And that was just BS. A lot of people built very compelling demos. I don't want to call it demoware. Although, I think in many companies there was demoware, which was a system built for a very specific track and a very specific demonstration, but not something that could be used in a general sense to automate a vehicle on the road. Part of the reason why I was writing, I don't want to say critically, but just very honestly about what was happening there, including publishing data on how these companies were rating themselves and their own technology, is because there was this dissonance between what was privately being shared and what was publicly being announced. So that's an easy thing to do right there: let's just narrow that gap, because that doesn't make any sense.
(19:13): People are making life decisions, going to work for companies based on completely false assumptions. That's not good for anyone. Investors are investing pension money and other people's money in companies based on what's publicly announced and not what's privately known. That's not good, either. There was just a notion of being realistic about it.
Satyen Sangani (19:33): Amir believes that companies in the self-driving car business must be more transparent with each other.
Amir Efrati (19:38): And I think there was a lot of frustration, and still is a lot of frustration, in the field of self-driving cars, where there's not a lot of data sharing happening between companies about best practices, and what they could do better, and how they can avoid having a weak link that ruins it for everyone else. And by that I mean, if you look at a parallel, like the aviation industry, one of the things the aviation industry did was come up with a set of safety standards, which are very difficult to do with self-driving cars, because we're at such a nascent stage, but they at least came up with a framework for discussing really serious problems that had come up. I think there's a lot of frustration, because there's a lot of low-hanging fruit that could be picked off to solve some basic problems so that nobody is ever put in harm's way the way they were.
(20:25): That's one really interesting thing: all these data silos that have been created because of competition. But because it's something that touches public roads, public infrastructure, I think it really brings it into the realm of, there's a duty of transparency here. Because this is stuff that's on my street right now. I'm in downtown San Francisco talking to you. Right now, at this moment, there are probably at least two dozen self-driving car prototypes running around. The rain has stopped. The atmospheric river has ended. So there are prototypes running around all over the place. I think it behooves these companies to explain to the public what is actually going on. Thankfully, for the most part, they are operating these things safely. But anyway, it was a really fascinating realm to look into. And I am deeply supportive and really hope that these companies solve the problem, but the problem still remains. And it's very unclear that experimenting in this way on the public roads is a good idea. There's certainly a question mark as to whether it's a good idea to put it into regular drivers' hands.
Satyen Sangani (21:26): It's an interesting case study though, because unlike the Facebook Papers case, where you've got lots of understanding and analysis and knowledge, in this case, you've almost got the exact opposite problem, where you're really still trying to understand the problem set of what the real world looks like and what the interaction is between this technology and the real world. And it feels like here you've got almost an analytical-process and data-transparency problem where, because of the profit motive, people are not willing to even have that level of transparency. It feels like, in some sense, the problems are a little bit farther upstream in the analytical process, but still hard, because people don't quite know what to share, what not to share, what their enterprise would not tolerate.
Amir Efrati (22:21): They won't even share what they measure.
Satyen Sangani (22:25): Right.
Amir Efrati (22:25): I think there is a parallel with Facebook there. Again, we're learning about metrics. We're learning about ways to measure the impact of a product and how to form data around it. What's interesting about the Facebook Papers is there's an area of subjectivity around speech, and freedom of speech, and what people should be allowed to say. And it's not lost on me, and it shouldn't be lost on anyone else, that the blocking of COVID-19 lab leak theories early in the pandemic now looks like a horrific decision. And I'm not saying I was supportive of it at the time. I had no idea. I didn't know what to think about it. I think over time, the notion that this could have come out of a lab became a lot more sensible. But now, looking back at it, it makes you go, "Maybe restricting that speech is just bad form."
(23:20): And we should be worried about that. What else do we not know? There are plenty of similarities with self-driving cars in terms of what we don't yet know. Self-driving cars, we at least know that we shouldn't kill people with these vehicles. We know that we shouldn't get into a situation where we might kill people or might hurt people. You can certainly start to establish a measurement around that. That's a lot more tangible and physical in nature. But both are fascinating realms, where there was really not a lot of transparency. And when there is more transparency, I think everyone's better off.
Satyen Sangani (23:57): Amir and his team have also begun to pay close attention to the cloud for some very important reasons.
Amir Efrati (24:03): We care a lot about cloud software, cloud infrastructure. It is one of the most under-covered, underappreciated areas, and I'm trying to hire more people to cover it, because I'm quite interested in it. Trillions of dollars of decisions are being made, and again, great example: anytime we give people a little bit of transparency into the very, very difficult data problems or cloud infrastructure problems that companies have, our readers go crazy for it, because they would not know otherwise.
(24:37): They would not know that Uber's internal systems went down for a whole day and why that happened, and the engineering culture that led to that, although it has improved remarkably since we wrote that story in 2015. Or more recent stories around the costs of using cloud providers like AWS, costs that are sometimes not well-controlled by the companies, and they get hit by a lot of surprise bills. So how do they keep a lid on that?
(25:04): When we write about how a company signs a really, really big cloud computing deal, let's say with Google Cloud or with AWS, but then proceeds to not spend any money after that, because it's actually very difficult to get their IT department to agree with some of their developers and to be able to use some of the tools of the cloud providers, because maybe the company isn't ready for microservices or something like that, that is massively instructive. So I think that there's a lot more transparency that we could all benefit from. I think a lot of CTOs and CIOs are very reluctant to share their heartaches, which is, I guess, an opportunity for a publication like ours.
Satyen Sangani (25:45): And I think an opportunity internally within these companies, too, because what's so fascinating now is that there's so much complexity, and hell, that's even true inside of Alation. I mean, we have software that depends on other third-party software, and a bug in that third-party software may create a security breach (we've all heard about this thing called Log4j) with ripple effects that you may not necessarily anticipate when you're building the software. And so then we all hire analysts internally to better understand our businesses, even though in theory, we should know all about them.
(26:18): And you're kind of doing the same thing, but from the outside. So how do you think about this world of specialization? I mean, Jessica wrote about it and talked about what could be. I think there was this notion of the end of journalism, but how do you think about this problem of hyper-specialization? What are the implications? How do companies need to respond? How would you want listeners to think about this problem? Because understanding this very complicated world seems like it isn't getting any easier, and yet that's kind of the task that's at hand for all of us.
Amir Efrati (26:55): It's kind of amazing how few CEOs outside of, let's say, Silicon Valley really understand how their company's infrastructure works, and what it means to have a hybrid cloud, or to use both internal data centers and data center providers like AWS or things of that nature. They just don't know it. So it can create a lot of challenges if they're not adequately outsourcing or allowing key decisions to be made by the people who really understand it.
(27:29): So I guess in the journalism context, the story that you're talking about was a column that was very well read, and it was making a very simple point, which is that whether you're talking about what we've just discussed, trying to make sense of the impact of the Facebook app, or trying to understand the development of autonomous vehicles, or trying to understand what cryptocurrency and Web3 applications are, these things are at this point very complex, very specific. And if you don't know how to go down that rabbit hole and ask the right questions, and have an appreciation for the problems that people are trying to solve, then you're in for a lot of pain, because you can't apply super-general lenses to everything; you really do have to start to specialize, as you pointed out, to be able to write and think properly about all these things.
Satyen Sangani (28:32): "With great power comes great responsibility." It's an iconic line from the original Spider-Man movie: the dying words of Peter Parker's Uncle Ben. Those same words become a superhero's guiding principle.
(28:45): And as I reflect on my conversation with Amir, maybe this phrase should be Data Radicals' guiding principle as well. Technology has given us the power to aggregate and quantify the world around us. We can create innovations that change our world for the better, but we also need to remember that our innovations can have consequences that we don't originally intend. And even if it never comes to pass, it's helpful for teams to think through both the potential successes and the potential failures of introducing a new technology.
(29:13): Thank you to Amir for joining us for this episode of Data Radicals. This is Satyen Sangani, co-founder and CEO of Alation, signing off. Thank you for listening.
Producer (29:22): This podcast is brought to you by Alation. Chief data officers face an uphill battle. How can they succeed in making data-driven decision-making the new normal? The State of Data Culture Report has the answer. Download it to learn why successful CDOs partner with their chief financial officers to drive meaningful change.
Season 2 Episode 25
Sanjeevan Bala, ITV's Chief Data & AI Officer and DataIQ's most influential person in data, embraces the ubiquity of data in the enterprise by embedding domain-specific data "squads" within business units to localize decision-making. He discusses how, unlike monolithic data teams, meshy data organizations are the best way to align data initiatives with business value.
Season 2 Episode 9
Ashish Thusoo has been on the leading edge of data culture, whether as a founder of a data lake startup, developing the Hive data warehouse at Facebook, or in his role as GM of AI/ML at Amazon Web Services. This discussion traces the evolution of data innovation, from big data to data science to generative AI.
Season 1 Episode 17
How do you change culture? First, speak softly and carry great data. Second, master tips from Tableau's CDO on how to be a data changemaker and people influencer, with a steady eye to your org chart.