Henri Stern is building Privy - a suite of API tools to store and manage user data off chain. He was previously a research scientist at Protocol Labs and worked on Filecoin’s consensus protocol.
Here is my conversation with Henri Stern who is building Privy.
Henri was previously a research scientist at Protocol Labs and worked on Filecoin’s consensus protocol. And after many years of thinking through problems related to data privacy and security, he recently co-founded a new company called Privy where they provide a suite of API tools to store and manage user data off chain.
In this conversation, we talked through a set of topics that Henri has a unique point of view on — starting with the question around the seeming trade-off between privacy/security on the one hand and UX/convenience on the other. We talked about principles he has in mind in designing an off-chain data system; how privy does encryption and key management; how they do permissioning; and how they think about data storage.
Timestamps:
(00:02:30) - designing the product/protocol roadmap
(00:10:30) - privacy/security vs. convenience
(00:19:27) - building a web3 application
(00:23:20) - decentralizing Privy
(00:32:09) - key management architecture
(00:46:11) - verifiability, transparency as a disinfectant
(00:59:02) - building a product with private data
(01:07:08) - cofounder relationship
Into the Bytecode:
- Sina Habibian on X: https://twitter.com/sinahab
- Sina Habibian on Farcaster: https://warpcast.com/sinahab
- Into the Bytecode: https://intothebytecode.com
Disclaimer: this podcast is for informational purposes only. It is not financial advice or a recommendation to buy or sell securities. The host and guests may hold positions in the projects discussed.
Henri Stern: Privy, building for data privacy and security
Sina [0:00:18]: Hey everyone. Welcome to another episode of Into the Bytecode. Today, I sat down with my friend Henri Stern.
Henri is absolutely brilliant. He used to be a research scientist at Protocol Labs and worked on Filecoin's consensus protocol.
And after many years of thinking through problems related to data, privacy and security, he recently co-founded a new company called Privy where they provide a suite of API tools to store and manage user data off chain.
In this conversation, we talked through a set of topics that Henri has a unique point of view on, starting with the question of the seeming trade-off between privacy and security on the one hand and UX and convenience on the other.
We talked about principles he has in mind in designing an off-chain data system. We talked about how Privy does encryption and key management, how they do permissioning, and how they think about storage. The middle parts of this conversation got a bit more technical than other conversations we've had on this podcast.
But I think we took the time to explain the different puzzle pieces and I personally learned from this conversation.
And with that, I'll leave you to it and hope you enjoy.
Henri [00:01:41]: One of our differentiators as a company and as a product in the space has to be around ease of use because too much, sort of, protocol-first tooling in this space is so hard to use.
And I think unless tools come out that make it easier for developers to take on user data privately in the next year and a half, the default is, “let's dump this shit into Postgres.” And so, we've taken the tack that basically we're going to start off way SaaS-ier than a lot of folks in web3 in order to build for an easy experience.
But obviously I think that also means that it can be really easy to lose sight of where the company goes, which is why building that sort of product roadmap felt really important. Because let's remember that the sort of end goal is self-sovereign identity in a very real way. Like users should control their data. Even if we start with developer control in mind.
Sina [00:02:30]: If we're kind of talking about the web3 ecosystem, a lot of people take this point of view of designing a protocol first and starting from the most idealistic version of the future that we're trying to see, and this kind of inadvertently creates a multi-year roadmap of building these technical puzzle pieces that have to all come together to result in a good user experience.
And one thing that I appreciate is that with Privy you very intentionally made the decision to start from where the world is today - what the user experience is that you're trying to create, what developers can actually integrate into their applications - and then have a longer-term roadmap for how you can back into more of an open protocol.
Henri [00:03:20]: Yeah, I think there's three main reasons for it. At least I'm going to try to make three up on the fly, but, I think this is true. And the three of them are first a question of personal preference. How I like to build. The second is a question of, I think, emergent behavior in this space.
And what happens if you try and build something too far out in the future? And the third is, I think, the fact that there's urgency around this, because of what happens if we don't build this quickly, and the time it takes to build protocols. And so, maybe going a little bit deeper into each: the first was personal preference.
Like, ultimately I think one of my frustrations with protocol building is the fact that the decisions you're making - and, you know, to be fair, I was working on the consensus protocol for a layer-one blockchain, so this is kind of the bottom brick of a lot of this stuff - but the decisions I was making with regards to block time, or the number of blocks that could be admitted in any given round, things like that, had a product effect on a multi-year timescale. And I really missed being closer to users. What does it mean in web3 to build with the user in mind? It seems like there's a pretty wide divide between consumer products in web3 and developer tooling and/or infrastructure in web3. And I wanted to get a little bit closer to a place where we could talk to the developers we were building for and get feedback on a daily, weekly, monthly basis.
So that was the first point of why we've opted to build in this way, which is to say, to build a much SaaS-ier product than a protocol. The second reason is emergent behavior in the space, which is to say the market is changing so quickly. And I think we can set a flag for, you know, this is where we hope to be in 10 years.
This is what self-sovereign data should look like, and this is what it should mean to be a user sort of surfing web3 when it comes to your personal data and the experience you have around giving people, or giving protocols, permissions and access to your personal data.
And yet there's a bit of futility in trying to build the ten-year thing, because the market changes out from under you. And basically I did not want us to be building sort of a religious vision of, you know, "follow our path to the holy land and we will guide you." Instead it had to be a tool that could be useful to developers in this space yesterday.
Because so many of my friends I was talking to were sort of saying, "I've built this prototype - it's a dapp, and I really like it - but I'm never going to ship it, because to make it any good I'd have to take on user data, which I don't want to do." And I kind of felt like there was some tension between the ten-year-journey path, where we make a promise of "if you suspend disbelief and you build with us for 10 years, this is where we get you," versus the real need developers have today that we can try and solve.
And even if that means being a little bit less perfect in what we ship up front, we can ship something that's useful tomorrow, in a way a protocol maybe would take a little longer to do. And the third is the urgency, which is, again, I think web3 is a bit of a knife's edge, right? You've basically got two schools of thought:
You've got sort of the web3 OGs who are like, yeah, everything happens between the client - the front end - and the chain. There basically is no backend to web3 products, no cloud, no state being kept anywhere but on chain. And then you've got these new entrants in the space who are used to web2 experiences - maybe they're coming from web2 - who are like, "I can't build a good product this way at all, so fuck it."
I'm going to go back to what I know. And I'm gonna drop user data into a database, into a warehouse or something like that. And I really worry that basically users I think have shown that they will choose convenience and delight over privacy because privacy is such a nebulous promise. And so unless we can sort of help level the playing field today and make it easier for developers who want to protect their users to do so, I really worry that basically the space is going to take a left turn in the coming year and a half and move towards an area in which developers are dumping user data into databases in order to build user experiences.
And we end back with the same data silos we had in web2 and web3 basically just becomes a business model. You have a chain to make money. Otherwise your infrastructure is exactly the same as it was before.
Sina [00:07:23]: Yeah, it's a very astute observation that this is happening. And yeah, the dichotomy is really there between developers who've really bought into the ideals of web3 - self-sovereign data, pushing power to the edges, all of these sorts of things - and who elect to build products that basically completely forsake user data at the expense of user experience.
Right. And I was reading some of the stuff you've written, and you have a lot of these good examples - you know, you don't get a notification to your email when you're about to get liquidated. I mean, just the lack of email communications, of these sites telling you what's happening with your on-chain activity, is a direct fallout of the fact that developers basically don't want to even hold a user's email address.
Right. So this is the one hand, where people do kind of hold to what they believe in, at the expense of a good product experience.
And then there's the other side, which kind of takes the view that users don't actually care about privacy - that ultimately the blockchain is, say, a global settlement layer for value, we can just do that part there, and everything else can still use a web2 stack. Which is a very narrow definition of this overarching vision that we've all been thinking about.
Henri [00:08:59]: By the way, the notifications example to me is a really good lens through which to look at the space, because it's obviously a clear UX problem that we have in this space. And there are others - there's the issue that, you know, your wallet is not your identity. So you're trying to log into a product with a given wallet, switch wallets - you know, I have five or six wallets with which I collect NFTs, and there's no unified user experience across all of them.
So there's a number of things that fall out, I think from the fact that we don't have private states, we don't have off-chain state in sort of a web3 native way. But I think notifications are a particularly good example because you've got all of these schools of thought playing out into many startups that are going after this problem.
You know, you've got the XMTPs and EPNSes of the world that are doing, as I understand it at least, on-chain-native notifications, and you've got folks like Notify Network or others that are building messengers on top of the wallets and public-key cryptography rails that we have, rather than on chain in a precise way.
And so the interesting thing for us is we're kind of trying to pick the good beachheads where we can prove out the value of having sort of web3-native off-chain data storage - private data storage for your users. Whether notifications as an issue will be solved through a product like Privy or through any of the companies that are dealing with that and only that is a question to which, obviously, I don't have the answer. All I know is that there are a number of ways in which developers are hemmed in because they don't have off-chain storage that they can reliably touch in a non-siloed way.
Sina [00:10:32]: Yeah. I mean, one of the things you've written about, which I wanted to talk about: you asked this question of whether the decision between security and privacy on the one hand, and convenience and user experience on the other, is a real invariant of how internet services are built.
Like, is this just going to be the case, or is it a kind of path-dependent thing that has emerged, where we've gotten stuck in this local piece of the landscape that kind of sucks? And then, how do you think about this in the context of the next couple of years and how web3 evolves? Why is this the time to build Privy, for example - why couldn't you do what you're doing a few years ago?
Is there something special about what's happening right now? Is it just a kind of Overton shift that's happened with crypto?
Henri [00:11:27]: Well, let me start with the first question, which is the question of path dependency versus invariant. At least as I've understood your question: is there some invariant in the tension that exists between good UX - convenience, ease of use - and, on the other side, sort of privacy-respecting technologies and things that empower users?
And, you know, I think the way in which this question is most urgent to me today is in the question of custodial versus non-custodial systems in web3. And I guess to define it: the question is who owns the keys at the end. If I have assets in a Coinbase account, Coinbase is the custodian.
It is a custodial solution. If Coinbase decides to cut off access to my assets, as they have with, say, various Russian nationals who are on sanctions lists, I can't get my assets back. And part of the core ethos of crypto early on was: I am dependent on no centralized third party in order to own the things I own, including my data - hopefully - even though that was obviously not in the original white paper.
I think on the flip side, obviously, key management, wallet management - all of it sucks pretty bad as infrastructure today. And so I think there is a tension between the democratization of access to web3 and our ability to have non-custodial experiences, ones where users custody their own assets.
So convenience very often means adding a middleman, and adding a middleman means being disempowered. So I think part of it is there is tension between the two at a fundamental level, where you have to choose between doing things in a more manual way, at a cost of UX, or doing things more easily, but at a cost of control.
Sina [00:13:06]: And it's the design space of different mechanisms that helps you navigate the trade-offs between these two. So maybe social recovery wallets are a design that helps you find a middle space there, where the user holds their own keys, but you're not putting all the onus on them of "if you lose these keys, you're totally screwed."
Like, there is a built-in mechanism for helping them recover.
Henri [00:13:33]: Yeah. And I think Argent, for example, has done a really good job of finding a middle ground here, in having what feels like a very web2-native way for me to say, here's my mother's phone number or my brother's email or whatever, that I can use to basically do key sharding and have a non-custodial experience - where by default Argent steps up, saying, we will be the guardian if you nominate no one else.
However, if you don't want us to be the ones that have control, you have a substitute. And I think this is where the second part of the answer comes in - the more exciting one, about path dependency - which is that I think the internet, maybe circa 2003, 2004, took a hard left in the direction of convenience: let's hide all the complexity away from users and we'll handle everything for them.
And I think this is a special time in crypto - one, because of the original ethos of the space, but two, because of what's happening on the regulatory side. I think to a large extent regulation is crypto's friend, or at least the friend of crypto-native ideals, because you're having all of these businesses refuse to do things that would be easier for businesses and for consumers, because they don't want to be on the hook for it.
And so there is a push towards non-custodial solutions, not for UX purposes but for regulatory purposes. And that's where I think crypto is really the nexus in which you're going to have much better privacy-preserving technologies, because there are, for the first time in the history of the web, I think, aligned incentives between the developer who doesn't want to fucking touch user data and the user who wants to own that data.
And so does it fully solve the tension? No. I think the tension will always remain there and I don't think there'll be any silver bullets in data privacy. That tension will always exist, but it opens up the solution space in a way that we haven't had in web2, and on top of that, you've got stuff like zkps and other primitives that are doing an even better job of giving us more tooling as developers to come up with solutions that are both more private and, have better UX.
So to the question of why Privy now, I think that's kind of the answer. I think we're having sort of twin moments where, as crypto enters the mainstream, it's also getting heavily regulated. And so for the first time, I think there's developer demand for more privacy-preserving technologies - maybe in the name of privacy, but mostly in the name of sort of liability control.
And that's a really exciting place to be, I think, as someone working in privacy-preserving infrastructure, and data infrastructure more generally.
Sina [00:15:55]: Right. And - I'll plug this now - you have a series of blog posts that I thought were super well written, I think from 2021, thinking through the landscape of privacy and how it's evolved over time.
And when you get to this point of regulation and how it's impacting things today, you made this point which I thought was really interesting: again, in the absence of proper infrastructure that developers can really easily plug into, this is also just going to turn into this weird morass of people complying with the regulation while not actually doing anything.
And you had this screenshot of NPR's GDPR prompt, which is a massive blob of text in legalese with "yes, I agree" - and the "no" option is "show me the website in plain text," which is the alternative the user's given. And I was just in Europe for this Devconnect conference.
So this problem is very alive. I probably said yes to 10 different GDPR prompts. And yeah, the privacy regulations are not actually doing anything.
Henri [00:16:05]: This is where what I mean when I say there's no silver bullet for privacy will remain true. I guess maybe two points here. The first is I think web3 is kind of the only place right now where you can build privacy tooling that is developer tooling and not compliance tooling.
I think if we were building Privy in web2, we would be selling to the general counsels, we would be selling to CISOs. We would not be selling to devs. And I think this is a unique opportunity in web3 also enabled by the fact that every user has their own cryptographic materials, that they're custodying assets with.
Everybody has a wallet, which unlocks new product experiences that we can't have elsewhere. So I guess I just wanted to bring this in: I think better privacy experiences online don't just open up the same UX but more respectful of the user - I think they open up entirely new UX that we can't even imagine today.
So that's stuff I'm excited to see coming from web3 that I don't think could be coming from web2. And on the other path, the thing I keep going back to is the fact that Cambridge Analytica - that data breach - was not a hack. It was Facebook misconfiguring systems.
It was sort of bad systems design, and users agreeing to exactly what happened, which is that any third party I'm sharing data with can share my data onward. And so I think the onus will always be on developers and app builders to make these decisions. I think we need tooling that makes it easier to do so.
And this is where I think, by not building a protocol but by building Privy as developer tooling - sort of hosted infrastructure, whilst giving developers, and in the end the end users, the chance to host their own infrastructure if they don't want to trust us - I think what's really important is giving people optionality. But by giving an option where we host infrastructure for people, we are sort of getting our hands dirty, partaking, and hopefully setting good defaults for developers to adopt.
Because that question of, say, how do you ask users for consent is a really hard one. And the anti-pattern we're seeing because of GDPR, with the banners - the sort of consent banners at the bottom of webpages - is that they push you into either "go fuck yourself" or "here's our product experience," but it's not much of a choice.
I think that's what risks happening here as well, if we don't design good tooling around this.
Sina [00:19:27]: Yeah, it's so bad. It makes no sense. Maybe just describe what Privy is, so people know as we continue this conversation and have a way to place it on the landscape.
Henri [00:19:40]: So Privy is a simple API to manage user data off chain. Ultimately, Privy takes care of three things for you: it takes care of key management and encryption, it takes care of permissions, and it takes care of storage. And you have two main calls, PrivyPut and PrivyGet. You add those to your front end, and using that, you can basically say PrivyPut, you pass in a key - like a user wallet address - you pass in a value - say, your user's email - and you can basically associate an email to a wallet address, or associate any data (structured, unstructured, videos, images, and so on) to user wallets in a privacy-preserving way. What that means is, when you call PrivyPut, Privy basically encrypts all of that data client-side in your user's browser and stores the ciphertext; we have no access to the underlying user data.
And then, as you or your user needs it, Privy serves that data back so you can build better UX. So you can have, you know, user profiles that actually take both on-chain and off-chain data to build an experience around that - we talked about the notifications use case and so on. But at its very core, Privy is a way to add a few lines of code to your front end so you can integrate sensitive data into your product without taking it into your stack, or for that matter without having to build a backend.
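A minimal sketch of that put/get flow in TypeScript. The client interface, method names, and signatures below are illustrative assumptions for the sake of the example, not the actual Privy SDK surface:

```typescript
// Hypothetical client surface matching the two calls described above;
// the real package and signatures may differ.
interface PrivyLikeClient {
  put(userId: string, data: { field: string; value: string }): Promise<void>;
  get(userId: string, field: string): Promise<{ value: string } | null>;
}

// Associate an email with a wallet address. Per the description above, the
// client library encrypts the value in the browser before it is uploaded,
// so only ciphertext ever reaches the data store.
async function saveEmail(privy: PrivyLikeClient, wallet: string, email: string): Promise<void> {
  await privy.put(wallet, { field: "email", value: email });
}

// Read it back later; decryption likewise happens client-side once the
// wrapped key has been unwrapped.
async function loadEmail(privy: PrivyLikeClient, wallet: string): Promise<string | null> {
  const record = await privy.get(wallet, "email");
  return record?.value ?? null;
}
```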
Sina [00:20:59]: Got it. So let's say I'm a web3 developer. I'm at this fork in the road of "do I just completely forsake user data and not have notifications built into my site," for example, or "do I just go full web2 and have a client-server thing going on, where there's this database that I keep" - and instead now there's this third path open, which is "I integrate the Privy API and it gives me two methods, which is super easy."
And you can basically store data - all the complexity, the encryption, the permissioning, all this stuff is kind of abstracted away through this API. But you basically push user data into this external service, which is Privy. And then at any point in the future, through some key-management handshake stuff, you can get that data back for a particular user. So this is the premise of how this works.
Henri [00:21:59]: Exactly. And today, the focus is really: let's help developers protect their users and protect themselves. Let's make it so that if your stack gets hacked, you're not leaking user data left and right - which would just be a harm to everybody. And let's basically help you do better for your users.
And the goal thereafter is to move from, right now, developer control - meaning the developer sets both the schema ("here's the data I need") and the permissions ("here's who gets to use it within my company, across my user base, and across other apps") - to a place where the developer still sets the schema but Privy is under user control.
So the end vision here is to move to a place where, the same way that when you turn on Uber on your iPhone you get a modal that says, "Uber would like to access your location. Do you want to give it access? Yes or no?"
You would log into Uniswap and it might say, "Uniswap needs access to your email so they can tell you whether your transaction gets dropped from the mempool."
"Would you enable them access to your email? Yes or no?" And then as a user, you have a control center where you can see what dapps have access to what data of yours, and you can revoke that data access. So what you've described is exactly where we are, which again is this sort of developer-centric world, because we think making this easier for developers is the first step.
But moving towards a more B2B2C aspect, which is helping developers communicate around data permissions with their users.
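A toy model of the consent flow Henri describes, purely to make its shape concrete; none of these types or functions are real Privy APIs:

```typescript
// An app requests access to a field, the user approves or denies via a
// prompt, and grants can later be revoked from a "control center" view.
type Grant = {
  app: string;       // e.g. "uniswap.org"
  field: string;     // e.g. "email"
  reason: string;    // shown to the user in the consent prompt
  grantedAt: number; // unix ms
};

const grants = new Map<string, Grant>(); // keyed by `${app}:${field}`

async function requestAccess(
  app: string,
  field: string,
  reason: string,
  promptUser: (g: Omit<Grant, "grantedAt">) => Promise<boolean> // the consent modal
): Promise<boolean> {
  const approved = await promptUser({ app, field, reason });
  if (approved) {
    grants.set(`${app}:${field}`, { app, field, reason, grantedAt: Date.now() });
  }
  return approved;
}

// The control center lists `grants` and lets the user revoke any of them.
function revokeAccess(app: string, field: string): void {
  grants.delete(`${app}:${field}`);
}
```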
Sina [00:23:20]: How does this play out over time, across different developers? And I definitely do want to get into how things are architected on your end, because I think it would be interesting to also just understand how this data is being stored and encrypted and what's going on there.
But as a developer: a new user comes in, they authenticate and give me permission to some of their data. Maybe I'm defining the schema of how their data is being stored. This data is then stored inside the Privy data store, and Privy is keeping kind of a registry or list of all the permissions that each user has given to every application.
And then over time, when a new application wants to get a user's email address, the user wouldn't have to put that in again, because if that website integrates with Privy, it would just use the same data store. So you kind of get this effect that we have with wallets, where a user is carrying their identity and data between different applications.
Is that right? And they ultimately are in control of the permissions and the access control around that.
Henri [00:23:15]: Yeah, that's absolutely correct. I guess just to touch it up one bit: the only delta between what you've said and the way the system works today - if you get Privy API keys today, what we'll actually be running - is that today we are having the developer set the permissions on behalf of the user.
And so the path from Privy today to Privy tomorrow is sort of split among three axes - at least that's how we're thinking about the product right now. The first is turning things into a user-controlled data store, moving to a place where the user is the one making permissions decisions.
So that's really ultimately a UX question. Apple today, I think, does the best job with notification and permission modals and everything else that works. And there's a lot of work on the UX front and product design to make this easy and intuitive, but also to help users make good, informed decisions - in a way that, say, Facebook doesn't.
So that's the first path: a more user-controlled Privy. The second path is a more decentralized Privy. In the vision that you shared, Privy is holding the permissions list. Ultimately, though, we should not have to be trusted. If you don't trust us, you should still be able to use the system: you can nominate another delegate, or better yet nominate a network, to hold that permissions list on your behalf.
So there's certainly a view to starting to decentralize our infrastructure, moving from how Privy works today - we run hardware security modules that do key management on behalf of users - to a place where we actually plug into the user's own wallet, so users are encrypting their data with the keys that safeguard their assets.
But then obviously that means: when a user is not online, how does someone get access to that data? How do you decrypt that data? And so this is where you'll need some version of a data delegate. To start, Privy is that data delegate. But ultimately you should be able to say, you know, "I don't trust Privy at all."
I do trust Sina. Now I'm going to give him access to sort of my data delegation key so that he can -
Sina [00:26:29]: 3rd party data delegates.
Henri [00:26:31]: Exactly. And then down the line, full networks, where we can use threshold encryption in the way that folks like Keep Network or Lit Protocol are doing, in order to have actual full networks act as data guardians and watchtowers, in a way where you're not actually trusting a single party.
So the move is from "trust Privy," to "you choose who you trust," to "you need not trust any single party in order to make this work." So that's a more decentralized Privy. And then the third axis is a more integrated Privy, which is to say: how do we actually unlock privacy-preserving usage of that data within sort of the product tooling?
And that is, you know, through the inclusion of things like homomorphic encryption or ZKPs, once the tech matures or the right use cases come up, and through things like building proxy servers - where you can actually, say, send an email directly from Privy by having a separate server spun up where the email is decrypted, the email is sent, and then all that data is sort of squashed.
And so being able to run computation basically on Privy nodes - where again, to start, we're hosting those, but the idea very quickly is to say we shouldn't have to be the ones hosting, because in fact we would much rather not. And so let's have a marketplace of Privy nodes that run certain computations, and you can pick from, say, Henri's email provider or Austin's - where Henri's is integrated with Mailchimp and this is how much he charges, and Austin's is integrated with SendGrid and this is how much he charges. You've basically got these data engines that run for you as a user. This is sort of how we're thinking about this.
Sina [00:28:05]: This is such a fucking huge idea. It's insane. It's really exciting to think about this future. Okay, so to make sure I understand: Privy today is this very narrow, simple tool, and it's this kind of wedge that is going to open up into a whole world in the future. And this narrow tool today is basically an API.
You call it, you push data to it. And it has a centralized data store backed by a key management system that you're running, which stores only encrypted data and basically pushes all of the decryption stuff to the clients. And the developer is your primary user at this point, where they're basically defining what sort of data needs to be stored.
And they're also defining the permissions - what we want from the user. The user doesn't actually get a view into this or a say in this. And I just feel like it would be helpful for me to retrace what you said: there are these three things, and maybe we should go into each, because I feel like these are each very deep rabbit holes.
So the first thing was permissioning: bringing a view similar to iOS, where Apple makes a new app say what data it's going to use, or when you auth with your Gmail, it says what the app needs. You basically want to create a version of this where every application integrating with Privy would give a little prompt to the user that clearly explains what it's trying to do with the data.
And the user can make this educated decision about whether they want to go through that.
Henri [00:30:04]: Yeah, no, that's absolutely correct.
Sina [00:30:06]: And that, I mean, that's itself a very hard problem to crack, right? Because you have to kind of think about the internals of these different applications and what sorts of things they're doing with the data.
Henri [00:30:14]: And then how to build, you know, interfaces that create just the right amount of friction, right? Because the issue is, if we build no friction into it, if we allow for this sort of data buffet, then we're back in web2. If we build too much friction, then nobody uses any of this tooling and we're back in web2 as well.
So the question is: how do we help developers keep their users informed, and how do we help users make good decisions - but all of this without destroying user experiences?
Sina [00:30:44]: Yeah, but at least there's an analog for this in the existing world - models to learn from. And then, okay, the second thread, which is a very interesting one, is decentralizing Privy - the actual architecture. So how does it work today behind the scenes?
Henri [00:31:02]: Yeah. And maybe what's helpful is I can give you a single sentence that summarizes each of the three epics. The first epic, about user control, is: who controls permissions? The second is: is Privy custodial? And the third, about the integrations, is: is the data useful natively? On the second one, the short answer is that today Privy is a custodial solution, whilst from an infrastructure security standpoint we are non-custodial - meaning it is impossible for anybody on our team to actually read the data without basically taking over the entire stack and changing a lot of configurations.
Like, the entire company has to go rogue in order for things to go wrong. The unfortunate truth is Privy can be subpoenaed in order to hand over user data, because of how things are architected today. And we want that not to be the case - we want to move towards non-custodial solutions.
And for what it's worth, I think there's a lot of custody theater happening in web3, of quote-unquote non-custodial solutions where you kind of squint and try to understand how it works, and it is, in the end, a custodial solution. We're seeing more and more of that. And that's where we fall today.
Sina [00:32:09]: Yeah. Just, how does Privy work today? How is it architected under the hood, in terms of what is happening with the keys? Where are they being held? Where's the data stored?
Henri [00:32:20]: Yeah. So the data itself is stored today with a cloud provider, namely AWS. And, you know, in terms of the ordering of Privy decentralizing, it's going to be key management and key control first. Well, actually, it's authentication first: the first step is, let us not be dependent on service providers to authenticate users. Rather than having to trust a dapp to say "this is in fact Henri logging on," let's have Henri log on on his own, thanks to technologies like Sign-In with Ethereum and other things.
And this is how ultimately we unlock access to sort of a global data store that is yours and follows you around web3 - a data backpack that exists because I don't have to rely on a third party to authenticate myself. So that's the first -
Sina [00:33:05]: So for first step is user has a data store. That's basically portable across different applications and they can log in with that.
Henri [00:33:14]: Exactly. And they don't need a third party to sort of authenticate their login.
The second piece is around key control, which is: a user encrypts their own data and is in charge of understanding who has access over that sort of data encryption and decryption. Today, the way it works is, whenever a new customer signs up with Privy, we spin up an HSM - these are servers dedicated to key management. And then for each of these customers' users, we have a root key. When a customer calls - and when I say customer, I mean developer; you can see we're a SaaS-y bunch - when a customer calls PrivyPut, what happens is Privy's client libraries, which are open sourced, basically generate a symmetric key client-side in the browser, encrypt the user data under that symmetric key, and then make a call to the Privy KMS -
- these HSMs - saying, "this is the user for whom I want to encrypt data; send me a new sort of wrapper key derived from that root key I was talking about,"
so I can basically encrypt the symmetric key under which the data itself was encrypted. So I'll repeat that really quick: the data is encrypted under a client-side-generated key - this is normal envelope encryption - and then the key used to encrypt the data is itself encrypted using this other key that sits in our HSM. And the key point, sorry for the pun, is that every piece of user data is encrypted under its own unique key.
So if, let's say, somebody is lying in wait in your server - you have malware or a breach - that puts at risk not just your data as an individual end user, but actually only that piece of data, not all of your data. And so that's really important: a new key is generated every time a new piece of data comes into Privy.
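A minimal sketch of this envelope-encryption flow using the browser's WebCrypto API. The `fetchWrappingKey` callback stands in for the call to the HSM-backed KMS and is a placeholder, not a real endpoint:

```typescript
// Envelope encryption: a fresh data key per piece of data, generated
// client-side, then wrapped under a key the KMS controls.
async function envelopeEncrypt(
  plaintext: Uint8Array,
  fetchWrappingKey: () => Promise<CryptoKey> // hypothetical: per-record wrapper key with "wrapKey" usage
): Promise<{ iv: Uint8Array; wrapIv: Uint8Array; ciphertext: ArrayBuffer; wrappedDataKey: ArrayBuffer }> {
  // 1. Fresh symmetric data key, unique to this piece of data.
  const dataKey = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true, // extractable, so it can be wrapped
    ["encrypt"]
  );

  // 2. Encrypt the data under the data key.
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, dataKey, plaintext);

  // 3. Wrap (encrypt) the data key under the KMS-derived wrapper key, so what
  //    gets stored is ciphertext plus a wrapped key and nothing in the clear.
  const wrappingKey = await fetchWrappingKey();
  const wrapIv = crypto.getRandomValues(new Uint8Array(12));
  const wrappedDataKey = await crypto.subtle.wrapKey("raw", dataKey, wrappingKey, {
    name: "AES-GCM",
    iv: wrapIv,
  });

  return { iv, wrapIv, ciphertext, wrappedDataKey };
}
```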
Sina [00:34:56]: These are the symmetric keys in the client. Yeah.
Henri [00:35:01]: And then even the wrapper keys are sort of unique to the piece of data that's being stored. But the idea is that the entire Privy system was built completely modularly. So the permission system, the key management system, and the storage system are all modules. And right now the KMS is these HSMs that Privy runs.
However, down the line the point is we can actually swap that out for your user's wallet. So the difference then is you get a MetaMask popup that says, "would you like to encrypt this data?" And you're the one basically encrypting that data with the API that wallets give us. And so we're swapping out our KMS for your very own as a user.
This is how we sort of decentralize this system over time. The step afterwards is decentralizing permissions: having the permissions oracle, the permission system, not sit with us, but actually be signed by the user and run on any given node that you want - and then, down the line, even run on chain.
The reason you wouldn't want to run it on chain today is because you don't want to reveal permissions. I think me putting my social security number on chain is all sorts of bad, but me saying, I will allow Alice to read it, but not Bob is still not great. And so, the point is you shouldn't have to trust Privy to run the permission system correctly.
You should be able to fork our permissions code and then run it yourself or ask someone else to run it for you if you want. So that's the sort of third thing, but this is sort of logic-gated permissioning, rather than cryptography-gated permissioning, which I can talk a bit more about.
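A sketch of the modular swap being described: the rest of the system talks to a small key-wrapping interface, and the implementation behind it can be a hosted, HSM-backed KMS today or the user's own wallet later. All names and endpoints here are illustrative, not part of any real API:

```typescript
// The system only ever asks "wrap this data key" / "unwrap this data key";
// who answers is a pluggable module.
interface KeyWrapper {
  wrap(dataKey: Uint8Array): Promise<Uint8Array>;
  unwrap(wrapped: Uint8Array): Promise<Uint8Array>;
}

// Default path: a hosted KMS backed by HSMs (endpoint shape is hypothetical).
class HostedKms implements KeyWrapper {
  constructor(private endpoint: string) {}
  async wrap(dataKey: Uint8Array): Promise<Uint8Array> {
    const res = await fetch(`${this.endpoint}/wrap`, { method: "POST", body: dataKey });
    return new Uint8Array(await res.arrayBuffer());
  }
  async unwrap(wrapped: Uint8Array): Promise<Uint8Array> {
    const res = await fetch(`${this.endpoint}/unwrap`, { method: "POST", body: wrapped });
    return new Uint8Array(await res.arrayBuffer());
  }
}

// Alternative path: the user's wallet does the wrapping via whatever
// encryption API it exposes; the two callbacks are placeholders.
class WalletKms implements KeyWrapper {
  constructor(
    private walletEncrypt: (bytes: Uint8Array) => Promise<Uint8Array>,
    private walletDecrypt: (bytes: Uint8Array) => Promise<Uint8Array>
  ) {}
  wrap(dataKey: Uint8Array): Promise<Uint8Array> { return this.walletEncrypt(dataKey); }
  unwrap(wrapped: Uint8Array): Promise<Uint8Array> { return this.walletDecrypt(wrapped); }
}
```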
Sina [00:36:28]: Okay. So on the key management piece, right now the basic architecture is that you're spinning up keys on these hardware security modules that you have, and these keys are used client-side to encrypt the field-specific symmetric keys that encrypt the user's data.
And then over time, you're basically going to switch out this backend KMS - the key management system that you have - for individual user wallets, right? So a user is using their own hardware wallet or their own MetaMask to basically encrypt these keys that are being created for each piece of data.
Henri [00:37:22]: That is correct. And I think the idea is to say: by default, Privy manages this for you. However, if you don't want us to, we absolutely don't need to - you can manage your own infrastructure, you can manage your own key management system, and we integrate easily with wallets so that you can do that in a very simple way.
Sina [00:37:41]: Yeah. So how, in the current system - because, I mean, there are also many other systems in the world that hold a ton of sensitive data, right? Like, say, 1Password, for example, that's just hosting all the passwords. So there are architectures that people are using for storing sensitive data.
And so I'm curious, with this model that you have, how would you analyze the risk vectors of something like this?
Henri [00:38:09]: So I'm going to start with a cheap answer. And the cheap answer is: it's already a hell of a lot better than if you're just dumping that data into PostgreSQL or into anything that you're securing on your own. Because, you know, maybe the claim I will make is that Google and Facebook are, in my opinion, not very good at data privacy.
And I wouldn't throw both of them into the same bucket, but bear with me. However, they are very good at data security - it's amazing how well, for the most part, they have secured data. And I guess maybe the most web2 version of the Privy pitch would be to say: you've got these huge companies dedicating teams a hundred people deep to data security. Let us do that for you. It makes no sense for you to build this in house, but you should still be offering the same level of service to your users. And so at the very least, we are taking this data out of your stack and we are encrypting it on a cell-level basis.
And so this is better than if you threw this into your own stack, even if you enabled encryption at rest - here, the data is end-to-end encrypted. So it's already quite different.
In terms of the security posture, there are sort of two attack vectors that I think are really worth looking at - or three. The first is malware: what if somebody is lying in wait in a user's own browser?
And the answer there is, well, it means the data is leaked, but it would have leaked anyway, because the user's typing it in and you probably have some version of key logging or something like that. However, it has no implication for other data stored in Privy - you're not revealing anything else about how Privy works beyond what pertains to you as a user and the specific data you've been typing in.
The second one is our own data store. In our data store we have encryption at rest, but underneath that we are only getting ciphertext in: the ciphertext is basically the data blob, encrypted, plus the key that was used to encrypt the data, itself encrypted. And if somebody breaks into the data store and siphons off ciphertext, unless they also have access to the KMS, all they have is the encrypted data from the data store.
Now, you know, one of the reasons why I think people shouldn't put encrypted data on chain is because ciphers break over time. And it's been known that, say, the US government or the Chinese government are capturing and storing ciphertexts with the understanding that 50 years from now they might be able to actually break the encryption and read the underlying data.
And so that is one of the reasons why decentralizing the storage in Privy, so that you can run Privy on your own nodes, makes sense. But nonetheless, today it's all ciphertext: if you break into the Privy data stores, you don't get access to any underlying data.
Now the third piece is the KMS. And the short answer there is: if you can break HSMs - today these are run by cloud providers in secure facilities - then all bets are off and basically all security guarantees go down.
You know, we're not building our own, specifically because we want to piggyback off the learnings of the last 30 years. What we take care of is taking these bricks and plastering them together in a way that the wall comes out solid. But ultimately, you know, the first rule of cryptography is "never roll your own crypto."
We're using sort of best-in-class systems from web2 in order to protect this web3 data. And we're sort of moving through cryptography history as we go, but right now we're in the seventies.
Like, we are using public-key cryptography. We will move on to using proxy re-encryption, and maybe someday we'll get to zero-knowledge proofs and threshold cryptography.
But today we're very squarely in the seventies and eighties in terms of the crypto that we use. With that said, obviously we're limited by the quality of the infrastructure of these HSMs that are cloud run, which to be fair, a lot of other services in web3 use.
So maybe we're talking about it a bit more openly, but everybody uses this stuff.
The last piece is the permissions oracle. And I think this is the most interesting one, and this is why it comes next in our decentralization, after the KMS: ultimately, if you control permissions, then regardless of whether the key system is working properly, you control everything.
And so I think that is the trust -
Sina [00:42:16]: You can get the system to decrypt different pieces of data if you control that piece of the stack.
Henri [00:42:23]: Exactly. So I think that is the really key part of having Privy be sort of user-centric, having a user-controlled data store. This comes down to how the permission system is architected.
Sina [00:42:35]: Yeah. And how is that architected?
Henri [00:42:38]: So we're trying to do cell-level permissions. I guess maybe I'll just describe the model really quickly: a user has fields. The user, in a sense, is a row in a database; the fields are columns. And the user can basically give requesters - the people asking for access to the data - read or write access to certain columns in certain rows.
So I get to say - and in this case it's the developer actually setting these permissions, but it would be the same for the user - I get to say Uniswap has access to my email, Sound.xyz has access to my phone number, and I want Sushi to have access to my home address, or something like that.
That's how we're setting these, so today that's how it's working, and basically we're using the existing cryptographic infrastructure to enforce these permissions.
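A toy version of the cell-level model being described, where a user is a row, each field is a column, and a requester holds read or write rights on specific cells. This is illustrative only, not Privy's actual schema:

```typescript
type Right = "read" | "write";

// user -> field -> requester -> rights granted on that single cell
type PermissionTable = Map<string, Map<string, Map<string, Set<Right>>>>;

function isAllowed(
  table: PermissionTable,
  userId: string,
  field: string,
  requester: string,
  right: Right
): boolean {
  return table.get(userId)?.get(field)?.get(requester)?.has(right) ?? false;
}

// Example: grant Uniswap read access to this user's email and nothing else.
const table: PermissionTable = new Map([
  ["0xUserWallet", new Map([
    ["email", new Map([["uniswap.org", new Set<Right>(["read"])]])],
  ])],
]);

isAllowed(table, "0xUserWallet", "email", "uniswap.org", "read");  // true
isAllowed(table, "0xUserWallet", "email", "uniswap.org", "write"); // false
isAllowed(table, "0xUserWallet", "phone", "uniswap.org", "read");  // false
```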
Sina [00:43:32]: Yeah. And this is so interesting. I like this term you're using of cell-based permissions, which I imagine means any intersection of a row and a column, right - of a person and a piece of data, and who has access to it.
And so you're using your existing cryptographic infra to secure this table, basically, that keeps all of these permissions together.
It's as if you're a user of Privy yourself - is that one way to think about it? As if you were a developer storing this table of permissions on Privy.
Henri [00:44:10]: Yes, but ultimately Privy right now is the one controlling that permissions table. So that's the part that we need to change, which is to say: today you are trusting Privy to enforce permissions as you've set them. And this is the part where it will become really important to make it so that Privy cannot lie about the permissions that you've set for it.
And so there are a number of solutions around this in terms of how we evolve the crypto system to do that. But at the simplest level, that might just mean the developer signs the permissions it sends over to Privy. So Privy, in justifying the permissions decisions it makes, cannot lie about what the developer asked for in the first place - we can't spoof and say, "well, no, Sina did give us access to this," because there's a signature that shows that you didn't.
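One simple way to realize that signed-permissions idea, sketched with ethers v5's message verification. The grant shape is hypothetical; the point is just that the permission service can present a signature it could not have produced itself:

```typescript
import { utils } from "ethers";

type SignedGrant = {
  payload: { userId: string; field: string; requester: string; right: "read" | "write" };
  signature: string; // EIP-191 signature over JSON.stringify(payload)
};

// Anyone auditing the permission system can check that a grant really came
// from the expected signer (the developer, or later the user themselves).
function verifyGrant(grant: SignedGrant, expectedSigner: string): boolean {
  const recovered = utils.verifyMessage(JSON.stringify(grant.payload), grant.signature);
  return recovered.toLowerCase() === expectedSigner.toLowerCase();
}
```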
And so, you know, maybe to get back to a product feature level: the first step we're taking in moving towards trustlessness - which is a term I hate, as it happens - but in moving towards -
Sina [00:45:09]: Why? Because it's on a spectrum, usually?
Henri [00:45:16]: Exactly. I think "trustless" implies that there is such a thing as a trustless system, and there isn't - it's a question of what trade-offs you're making.
But in moving towards the trustless end of the spectrum, the first thing we're going to do is basically build auditing logs so that users can verify how their data has been used. And, you know, if you zoom all the way back out to the top-level system, what this means is a privacy policy that doesn't suck: instead of going to a website and seeing, "hey, here are ways in which we might use your data and types of data that we might collect about you,"
you're seeing, "hey, here's the exact data we have on you, and here's how it has been accessed over the last 30 days," in a non-spoofable way. So that's how we're thinking about walking that line towards trustlessness: let's start by making these systems verifiable, and then let's hand over control to the people who should have it - so long as we build infrastructure that makes it easy for them to control it, rather than it being the "fuck you" that GDPR is sort of pushing onto developers.
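One way such an audit log can be made hard to rewrite after the fact, in the spirit of the "non-spoofable" access history described here: each entry commits to the hash of the previous one, so any later tampering breaks the chain. This is a generic sketch, not a claim about Privy's actual design:

```typescript
import { createHash } from "crypto";

type AccessEntry = {
  requester: string; // who accessed the data
  field: string;     // which field was accessed
  timestamp: number; // unix ms
  prevHash: string;  // hash of the previous entry ("genesis" for the first)
  hash: string;      // hash over this entry's contents, including prevHash
};

function entryHash(prevHash: string, requester: string, field: string, timestamp: number): string {
  return createHash("sha256").update(`${prevHash}|${requester}|${field}|${timestamp}`).digest("hex");
}

function appendEntry(log: AccessEntry[], requester: string, field: string): AccessEntry {
  const prevHash = log.length ? log[log.length - 1].hash : "genesis";
  const timestamp = Date.now();
  const entry = { requester, field, timestamp, prevHash, hash: entryHash(prevHash, requester, field, timestamp) };
  log.push(entry);
  return entry;
}

// A user (or third party) can re-derive every hash and confirm nothing was
// removed or rewritten since they last checked.
function verifyLog(log: AccessEntry[]): boolean {
  return log.every((e, i) => {
    const prevHash = i === 0 ? "genesis" : log[i - 1].hash;
    return e.prevHash === prevHash && e.hash === entryHash(prevHash, e.requester, e.field, e.timestamp);
  });
}
```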
Sina [00:46:11]: Wow. That makes a lot of sense. That's such a cool idea - you can go to a place and see every time a third party has accessed a piece of your data, because anytime they do that, there's this cryptographic handshake happening and there's a record of it.
And there's this idea of basically transparency as a disinfectant - of just seeing what's happening. Just making that exchange transparent will have very positive downstream effects.
Henri [00:46:48]: And maybe I'll pop this up to a more philosophical question, but to me this is where, say, The DAO hack is such an interesting thing, looking back on it now a few years after the fact. To me the truth of it is less "code is law," which obviously didn't end up being true, and more that the transparency in these systems allows us to make better decisions - by being a disinfectant, by forcing us to have the tough conversations that we wouldn't have to have in opaque systems.
Sina [00:47:17]: That all makes a ton of sense. At a high level, I think we've talked about the key management and the encryption, and my very hand-wavy takeaway is that you've thought through your shit.
Henri [00:47:33]: Yeah.
Sina [00:47:34]: This is just kind of where it's all starting, and it's going to further decentralize over time.
Henri [00:47:45]: Yeah. And this is something I hadn't thought as much about before I got into Privy, but there are two orthogonal questions: who should control the system, and how can we ensure the system is doing what we think it's doing? Those are two questions that I think are often lumped together. This is why, for us, decentralizing Privy - meaning giving you a way to verify that the system is doing what it says it's doing - is sort of a separate epic from who actually gets to type in settings for the system and make decisions around permissions.
Sina [00:48:16]: Yeah. And so the third piece of the puzzle, which we haven't talked about, is storage.
Henri [00:48:22]: Yes. And frankly, I think it's the least interesting. So now I duck, because of my Protocol Labs days, where I spent a lot of time thinking about storage. But in a sense, this is maybe the hot take I have, which is: I think there's a bit too much decentralization zealotry in web3. I think decentralization is awesome, and I think certainly the ideals behind data sovereignty and ownership are central to what web3 means to me.
But I think, you know, I have too many conversations where people ask me, "but is it on chain?" And I'm like, that's not the right question. And so, you know, to me the question of who controls the system is deeply important, and decentralization has a role to play here.
And decentralizing infrastructure is really important there. But I think in terms of storage, the threat model we have is data leakage - will your data end up all over the internet without you wanting it to? It is not censorship resistance, which is to say, can a third party withhold data from you?
And so to that end, actually using centralized storage, I think, is a very sensible decision that we're making: we're putting the data in the cloud, it's encrypted end to end, and the question is who controls the encryption key. Down the line, obviously, we want to enable you to plug in your own sort of data module.
So if you don't want to use our cloud account - if you want to plug in your own virtual private cloud, or use the server in your basement, or use the IPFS network - you should absolutely be able to do all of these things.
However, I think the clear and present danger is data leakage, more so than censorship resistance. Maybe the trite point I can make here is that a lot of what people want to store with Privy is off-chain data, meaning real-world data. And the honest answer is, I know my SSN; even if company X refuses to serve me my SSN, I still have it.
So there's a path back from that. The real fear, though, is that everybody else also knows it. Obviously we want to tackle data silos by giving users control over who can access data, but again, I think it makes sense to start with centralized storage here.
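A sketch of the pluggable storage module implied here: because the system only stores and fetches opaque ciphertext blobs, the backend behind the interface can be a hosted cloud bucket, a self-hosted server, or an IPFS pinning service. Interfaces and endpoints are illustrative, not real APIs:

```typescript
interface CiphertextStore {
  put(id: string, blob: Uint8Array): Promise<void>;
  get(id: string): Promise<Uint8Array | null>;
}

// Default: a hosted HTTP backend (the endpoint shape is hypothetical).
class HostedStore implements CiphertextStore {
  constructor(private endpoint: string) {}
  async put(id: string, blob: Uint8Array): Promise<void> {
    await fetch(`${this.endpoint}/blobs/${id}`, { method: "PUT", body: blob });
  }
  async get(id: string): Promise<Uint8Array | null> {
    const res = await fetch(`${this.endpoint}/blobs/${id}`);
    return res.ok ? new Uint8Array(await res.arrayBuffer()) : null;
  }
}

// Swap-in: anything that can hold bytes behind the same interface, e.g. the
// server in your basement or an IPFS node. Shown here as in-memory for brevity.
class SelfHostedStore implements CiphertextStore {
  private blobs = new Map<string, Uint8Array>();
  async put(id: string, blob: Uint8Array): Promise<void> { this.blobs.set(id, blob); }
  async get(id: string): Promise<Uint8Array | null> { return this.blobs.get(id) ?? null; }
}
```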
Sina [00:50:32]: So you're basically making a judgment that although censorship resistance is important and an ideal that we all aspire to, the real threat vector for this sort of data is not censorship resistance but the data just leaking and being plastered all over the internet. And then again, there's the point of comparison of what people are already doing today.
What are developers doing today? They're just putting it on their own servers in a Postgres database, right? So that's the point of comparison. But I mean, yeah, I liked the point you make around censorship resistance - a lot of this data is stuff that the user knows and can basically re-input into the system on demand, or kind of connect the dots another way.
And does that kind of imply that you envision Privy being primarily used for this identity-level data - at the intersection of a person and an application - rather than, say, "I'm building a decentralized messenger and I need to store every message in the system in a private way"?
Henri [00:51:49]: Yeah, that's a really good question. And the short answer is, certainly this sort of identity, and the identity-cross-app intersection, is where we started off our thinking. We're doing a lot of work now around messengers specifically, decentralized social networks, and maybe the notion of a data pod. I think there are applications out there - I guess I'll name-check another that I really like, Farcaster - who have a really good model around sufficiently decentralized data stores, where the social graph lives online. And I think we think a lot like them, or they think a lot like us, which is to say: by default, they give you infrastructure to manage, as it were, your own casts.
But if you don't want them to be the ones managing it, you have an ability to swap out the storage for something you control. And so the short answer is, hopefully we will do all of these in time; however, to start with, it seems to me like the bigger threat is the data leakage -
with the one caveat, which is to say that interoperability is extremely important.
So we need to give users a way to port their data from web3 service to web3 service.
Sina [00:53:03]: Got it. So the takeaway being that on the storage piece, there's basically a set of servers - cloud providers - holding all of this encrypted data, and the fact that that data is encrypted using all of these mechanisms we've talked about is the real value prop at this point.
And over time, you know, that potentially leaves open the vector of censorship resistance. Like, if the US government wants to do something to make this data inaccessible, they maybe have a vector towards doing that. But because you've built this system in a modular way, where the pieces plug in together, you can just change how the data is stored while it remains interoperable with the rest.
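(A quick illustration of the modularity being described here, for developers following along: because encryption happens above the storage layer, the backend can in principle be swapped - managed cloud, a server in your basement, or IPFS - without changing what the rest of the system sees. The interface and class names below are a hypothetical sketch, not Privy's actual API.)

```typescript
// Hypothetical sketch of a pluggable storage backend. The encryption layer
// only ever hands ciphertext to whichever backend is plugged in, so swapping
// storage does not change the trust model around the plaintext.
interface StorageBackend {
  put(key: string, ciphertext: Uint8Array): Promise<void>;
  get(key: string): Promise<Uint8Array | null>;
}

// A managed cloud backend: an object store reachable over HTTPS.
class CloudBackend implements StorageBackend {
  constructor(private baseUrl: string) {}

  async put(key: string, ciphertext: Uint8Array): Promise<void> {
    await fetch(`${this.baseUrl}/${encodeURIComponent(key)}`, {
      method: "PUT",
      body: ciphertext,
    });
  }

  async get(key: string): Promise<Uint8Array | null> {
    const res = await fetch(`${this.baseUrl}/${encodeURIComponent(key)}`);
    if (!res.ok) return null;
    return new Uint8Array(await res.arrayBuffer());
  }
}

// A self-hosted or IPFS-backed implementation would satisfy the same
// interface, which is what keeps the data interoperable with the rest of
// the system while the storage location changes.
```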
Henri [00:53:52]: Yeah. And I think there's a really interesting question here, and this is something we thought a lot about at Protocol Labs as well, which is: how do we help build a system that emphasizes not just decentralization in terms of the entities running services, but decentralization in terms of the underlying hardware this stuff runs on?
And, you know, to some extent I would question: it's one thing to put your data on a peer-to-peer network, but who is running the peer-to-peer nodes? If your threat model is the US government, how do you know that the peers you're storing data on are not also running on a cloud?
And so this is the distinction between, I guess, economic decentralization - who are the actors making decisions around this - versus infrastructure decentralization. And I agree with you; I think down the line, if that is your threat model, if your threat model is having a government come after you, then you should run all of your own infrastructure.
Sina [00:54:55]: Yeah. And there's a lot of decentralization theater, and a lot of fuzzy thinking that conflates the different layers of the stack and what's actually happening, and just calls something decentralized. It makes me think of Balaji's exploration and posts back in the day, where he was talking about a metric for decentralization: looking at the number of independent client implementations for different blockchains, the number of developers on each of those, and where the nodes are being run. And really, the level of decentralization is the weakest link in that chain.
Henri [00:55:34]: And frankly, this is where I'll come in with my most libertarian ideal, which is that at the end of the day, the only solve for this is transparency and user choice. I don't know that we can predict how decentralization will turn out - on what layers of the stack it will play out well and on what layers it'll fall short.
I think the best we can do is build a system that is composable so that, as the world evolves, we give our users optionality and we allow them to tune the system so it fits their preferences and their risk models.
Sina [00:56:10]: Yeah. And the world is evolving. There are a lot of very smart people working on the different pieces of this puzzle. And so building in a modular way lets you plug into these new things as they develop.
Henri [00:56:24]: Yeah. And for what it's worth, maybe I'll go on a tangent for one second. One of the most exciting things I've been seeing is just the amount of work happening around self-sovereign identity and around web3 data. We're addressing a given segment of it through this notion of secure off-chain data storage as a complement to on-chain storage, but there are a lot of composability networks and folks doing fantastic work there. And one of the things I'm so excited to watch play out is how this stack will come together. The ISO layering really hasn't landed yet.
And I think we're in a state post big bang, where matter is plasma and we're starting to see granules of planets forming. So I'm excited to see what the ecosystem ends up looking like and how, as a builder in this space, I build a stack that includes user data in, what, three or five years from now.
Sina [00:57:17]: Yeah. And I think at this point in the journey of web3, if you're building a product slash protocol, which a lot of people are - you know, Uniswap is both, it has a front end and then a smart contract system running behind it - you could use Privy very easily, through a couple of API calls, to store your users' private data, and do it in a way that is much, much more secure than doing it yourself. And that leans into all of the composability that comes from it: it's this user's particular data, which in time they'll be able to tie directly to their address. I think it's a very worthwhile path for people to think about.
Henri [00:58:11]: Exactly. And I think the idea, down the line, is that you can have an honest conversation with your users, with delightful UX, about their data, where you actually look them in the eye and say, this is how things are being handled. This is maybe about building against developer guilt: the number of developers I've talked to who are wincing a little bit, knowing, ah, I could be doing better for my users, but I'll figure it out once I have product-market fit.
And then that turns into, I'll figure it out once I'm done scaling this product, and it ends up never happening, completely screwing over both the product that they've worked so hard on and, more importantly, their users.
So I think the idea is: can we give you tooling that allows you, at its base and at its core, to do a much better job on your own, but also bring users back into the loop so they can exercise control over their own data decisions?
Sina [00:59:03]: Yeah. How do you think about how such a developer would run analytics or think through how their product is being used? Like, try to glean information and insights from that.
Henri [00:59:19]: Man, that is such a good question, and one that I have only the beginnings of a response to. I think data analytics, and privacy-preserving analytics, in web3 is a huge space that's been underexplored. And I think it exists across levels, such as what I might call public intelligence - the whole block explorer space, which I think we've only started scratching the surface of - and telemetry: understanding how the peer-to-peer networks actually work.
And MEV - this is a weird statement to make, but MEV to me is a version of blockchain telemetry: what happens in the dark space between the mempool and what ends up on chain. I think the same thing is true in terms of who's running nodes and where nodes are located.
There's this fascinating thing from, I think, a couple of years ago, where there was a huge power outage in a region of China - I forget which - and Bitcoin's hash rate went down by a sizable fraction. And it was the first time we were like, oh, that's where those nodes are being run. I think that's fascinating.
And then the third area, obviously, is data analytics for end use: understanding who is accessing my protocol - from my website, from my mobile app, through partner integrations, or through my smart contract directly. And the short answer is, I think Privy, as an encrypted-by-default key-value store, has some answers to bring there.
However, I think you need more abstractions on top of such a key-value store in order to build easy-to-use analytics for developers. It's not our focus today, but, you know, maybe a shout-out to everybody listening: if you are thinking about data analytics for web3, I think we would love to help you build, because doing this in a way that respects users and their privacy is extraordinarily hard, and I think extraordinarily worthwhile, so developers can keep building better products.
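(To make "encrypted-by-default key-value store" concrete: the value is encrypted on the client before it ever reaches storage, so the store only sees ciphertext and an identifier. This is a minimal sketch using WebCrypto AES-GCM; the store shape, the field naming, and the way the data key is obtained are assumptions for illustration, not Privy's actual interface - in practice the key would come from the key-management architecture discussed earlier in the conversation.)

```typescript
// Minimal sketch of writing to an encrypted-by-default key-value store.
// The store itself only ever receives ciphertext.
async function putEncrypted(
  store: { put(key: string, value: Uint8Array): Promise<void> },
  dataKey: CryptoKey, // assumed to come from the key-management layer
  field: string,      // e.g. "user:0xabc.../email" (illustrative naming)
  plaintext: string,
): Promise<void> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per write
  const encoded = new TextEncoder().encode(plaintext);
  const ciphertext = new Uint8Array(
    await crypto.subtle.encrypt({ name: "AES-GCM", iv }, dataKey, encoded),
  );

  // Persist the IV alongside the ciphertext; neither reveals the plaintext.
  const record = new Uint8Array(iv.length + ciphertext.length);
  record.set(iv);
  record.set(ciphertext, iv.length);
  await store.put(field, record);
}
```

Analytics tooling would then have to work either on access patterns the user has consented to expose, or on aggregates computed without decrypting individual values, which is one reason doing this in a privacy-respecting way is hard.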
Sina [01:01:13]: Yeah. It's almost like we need an entirely new stack, built on a fundamentally different architecture of how data flows through an application.
Henri [01:01:30]: Completely. Across clients, on-chain and off-chain systems. But then, you know, I feel like every path leads back to Rome, and in this case Rome is identity. There's also the question of how you define a user, right? Is it my 0x address, my Ethereum address? And if I log in through Solana, can I actually link my Solana wallet with my Ethereum wallet?
How do I do that? This is something Privy has thought a lot about. And actually we have a wallet linkage API that allows you to link multiple wallets together in a privacy-preserving way.
But then beyond that, there's a question of, do I actually want that linkage to be made public and to whom?
Maybe I have a different DeFi identity than I have an NFT identity, and how should that be taken into account by analytics providers? Ultimately, this is frankly a whole bag that so far we've not touched, where I think, again, there are really interesting things to be built, but it's a bit outside of our purview.
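(For the wallet linkage idea: a common pattern - sketched generically below, and not a description of Privy's actual wallet linkage API - is to have each wallet sign the same linkage statement, then keep the resulting record private by default and disclose it only to parties the user chooses, which speaks directly to the "do I want that linkage made public, and to whom" question.)

```typescript
// Generic sketch: prove one user controls two addresses by having each wallet
// sign the same statement. The signed record can then be stored encrypted and
// shared selectively rather than published on chain.
interface LinkProof {
  statement: string;
  addressA: string;
  addressB: string;
  signatureA: string; // signature over `statement` from wallet A
  signatureB: string; // signature over `statement` from wallet B
}

async function buildLinkProof(
  addressA: string,
  addressB: string,
  signWithA: (msg: string) => Promise<string>, // e.g. an Ethereum wallet's message signer
  signWithB: (msg: string) => Promise<string>, // e.g. a Solana wallet's message signer
): Promise<LinkProof> {
  const statement =
    `I control both ${addressA} and ${addressB} (issued ${new Date().toISOString()})`;
  return {
    statement,
    addressA,
    addressB,
    signatureA: await signWithA(statement),
    signatureB: await signWithB(statement),
  };
}
```

A verifier then checks each signature against its address; whether the proof is shared with a given analytics provider stays a user-level decision.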
Sina [01:02:23]: Yeah. And again, what I like about this approach is that it's starting from the place of: where is there usage today? What are the specific applications that need something like this today? And then it backs into this larger vision. Because I think a lot of people, myself included for a short period in 2018, have thought about these questions of identity and reputation, and it just doesn't work if you try to approach it in the abstract and design this magnificent system that all the pieces are going to plug into. One, there's just a lot of uncertainty in how all of it evolves. And two, you need to get adoption, and systems in the real world evolve from individual threads that have grown in complexity over time.
Henri [01:03:22]: And, to get spicy and double down on what you said: I completely agree. I think most developers don't wake up in the morning thinking, how do I solve identity - except maybe those working on identity solutions. Most developers wake up in the morning thinking, how do I build a really delightful tool for my users, and how do I solve problem X?
How can I send notifications to my users? I really don't want to touch this PII, this information that puts my users at risk. And so that's the problem we're trying to solve for today. And frankly, this is why, again, transparency is maybe the core thing we're trying to build with in mind: making sure the systems are auditable before they're decentralized.
So that as we make mistakes - and the space is too complicated for mistakes not to be made - we can be called out and we can improve. And ultimately the system is open source, so that if you don't like the way we're running it, you can run it yourself, and we make it easy for you to do so.
Sina [01:04:23]: Yeah. What has your personal experience of working on Privy been like? Are you enjoying it? What has been particularly surprising or strange about this journey so far?
Henri [01:04:38]: The biggest delight I get, I think, is working with my co-founder Asta. She comes from the self-driving car world, and so we were both thinking about a lot of the same data infrastructure and privacy problems - I was thinking about it through a web3 lens, she was thinking about it through a more web2 lens - and merging forces and working on this together in a web3-native way has been such a delight.
And I think we complement each other really well; it's like we have a zealot and a convert, and it's been a lot of fun to see web3 through new eyes. At this point I would say she's a grizzled veteran. But even take wallets: I started using new mobile wallets like last month.
Because I realized, fuck, there are all these wallets that exist now that I just didn't know about. And I have my old system, with my hardware wallet and my way of doing things that I developed in like 2017, and I now realize, wow, this space is light years away from that. So maybe that's it: first, the delight of working with Asta, and then the ability to see the space with new eyes as I dive deeper into what the UX of the space is like.
And then, one of the things I find really interesting, going back to our earlier conversation, is that tension between protocol-level thinking and product-level thinking in web3. I find it very easy to fall back on what I know from Protocol Labs and think about things at a systems level, rather than asking, what is the problem I'm trying to solve for developers right now?
Sina [01:06:16]: Totally. And I think that's one of the unique strengths you have: you can fluidly go back and forth between these two worlds, whereas most people are primarily approaching the world through one lens.
Henri [01:06:30]: I hope you're right.
Sina [01:06:33]: And this problem of upgrading your own wallet setup, that is one that I definitely resonate with. I think I'm squarely stuck in the 2017 paradigm and have had it on my to-do list to think about how to do this. So I feel like if you write a blog post around that, you'd make a very small subset of people incredibly happy.
Henri [01:06:56]: I've actually been thinking about doing this for a little bit: here's a very simple action, let's run through it with these six different wallets and compare experiences. So you've given me the push I need to do it.
Sina [01:07:08]: Yeah. I'm curious about your co-founder Asta, because I also read this blog post you've written around questions to ask your co-founder. It was from a number of years ago, you've founded a company before, you've been thinking about these sorts of things for a while.
How would you describe your relationship together? What could you share about it?
Henri [01:07:35]: The first thing I'll say is, I think there are no good rules here. So whatever I'm about to share is basically useful insofar as it's informed my personal journey. One thing I'm a bit allergic to is the sort of modeling-out of relationships - this is what it takes to be a good co-founder - that Silicon Valley is so fond of.
So, you know, ultimately, to each their own, but I guess I'll talk about what our relationship has been like. I think one of the things we did really well early on was investing in trust and in communication. When we met, we were both, I think, on our way to building a company on our own, and we just really liked each other.
At least I was extraordinarily impressed with her - with her intellect and hunger and all of these things. And I was like, fuck, I really want to work with this person. I will be better, and whatever I build will be better -
Sina [01:08:27]: How, and in what kind of context, did you meet?
Henri [01:08:31]: We got introduced because we were thinking around the same space. A common friend basically put us in touch, saying, you're both thinking about data privacy, you should chat; maybe at some point you'll want to work together. And we had a first call, and I hung up and was like, oh no, we need to have a second call ASAP. And we basically ended up talking every day for a week or two before saying, well, let's get serious.
Do we want to build this together? She lived in SF at the time, I lived in New York, so I flew out there, and we basically spent a week together doing what I would call hardcore therapy: walking and talking and working through all of the things that are really not fun to talk about.
And I think what I've learned is: in the heat of the moment, when you're working on the product, it can be super easy to have everything be about the product and kind of forget the means of communication that you have with your co-founder or with your team. So we just invested upfront, not in working on product, but in really talking about how we talk about things.
How do we disagree? If I say this, is it hurtful? Am I being a dick right now? Building up that trust means that today it is really easy for us to hop into a channel after a meeting and say, oh, I didn't like this, or, I really liked this, and it never feels personal, because we've built a communication pattern that allows us to be very earnest with each other.
And that just saves us a lot of time. So I guess maybe the best thing I can say about our relationship beyond her being amazing in general is the fact that it feels very safe. And I think that sense of psychological safety around your co-founder means you can actually focus on the fucking product and on your users, which is what you should be focused on.
Sina [01:10:17]: A hundred percent agree. And the level of trust you have together allows you to communicate with much more efficacy. You can communicate knowing that this person knows you're coming from a good place.
Henri [01:10:36]: Well, and there's no 3D chess happening. I think we're both checkers players when it comes to communicating together, which just saves a lot of time. She also brought in a cultural point - she used to work at Aurora. And I guess a fun fact about Privy is that most of our team today comes from self-driving.
We've got people from Cruise, and Aurora, and Nuro - it's really fun. I know nothing about ML and self-driving, so it's really fun to be a part of it. But one of the things she brought, which I thought was a really great point of culture to have at a company, is to assume the best in others: when people come with an idea, start off assuming the best.
And I think that's something we've been working with internally, and it's been super, super helpful.
Sina [01:11:24]: Yeah. Going back to this week that you spent together talking through things - because it's probably the most important decision in the lifetime of the company, who you're starting it with - I'm curious: were you talking about these meta-level points of how do we communicate? Because my sense has been that you can only really get a read on these things by going through an actual experience together and seeing how it happens in practice.
So working on a project together is a very good way to get a sense of the underlying principles that this person holds, too. But I'm curious if you've found an effective way of unpacking the root data itself.
Henri [01:12:14]: The short answer is that I agree with you. And we weren't just talking about meta-level stuff, to be honest; we also put the guts of company building upfront. What should the equity split be? What should titles be? What will each person be working on? Actually talking about all the very uncomfortable subjects upfront. And maybe it was helped by the fact that we just didn't know each other before, that we hadn't been close friends.
And I think this is one of the pitfalls: when you work with a really close friend, you already have a language with that person. Sometimes it is the same language as the language of working together as co-founders, and sometimes it's not, but it's very easy to mistake one for the other and under-invest in the hard conversations when working with friends.
So maybe it was also helped by the fact that we were clearly there to figure one thing out: do we want to work together? Do we want to saddle each other's product aspirations to one another? You know, I'm a deep co-founder romantic, but I think ultimately, the moment you take on a co-founder, the product's not really your own anymore; you're co-parenting it with whoever you work with. And that obviously takes a huge amount of trust in someone. So some of it was product conversations, some of it was tactical conversations around, say, fundraising.
Some of it was guts conversations around equity ownership, ways in which decisions are made, and so on and so forth. And then maybe my last thing is: I really like the image of exponential backoff from computer science.
To me, there's sort of an exponential backoff as you work on an idea and as you work with a partner. Initially, you commit a day: after that first call we had together, we were committing to having a second call; after the second call, we were committing to another two; after the next two, to about another four. And basically, by the time we flew out, we were committing to, let's try and make this work for two months.
And then, at some point, you kind of forget about the exponential backoff and you're just doing it. But incrementally longer commitments are the way I found not to overthink things. At the end of the day, I don't need to sign in blood that this is all I'll be doing forever the moment I meet someone. That trust can come over time. It just needs enough structure that we're going through the tough things upfront.
Sina [01:14:37]: Awesome. Well, we've been talking for a while, so I think this is a good place to close. Thanks so much, man.
Henri [01:14:45]: Thank you so much. Thanks for having me on. And yeah, I love all of this; I'm really thankful to be a part of it. I also - I don't know if this is a thing you're allowed to say - but I think the opening jingle is fucking dope. So very excited about that as well.