Librepunk Podcasts


001 - Talking about trust

from trash cat tech chat

13 May 2022

Listen: ogg | mp3

Show Notes

trash cat (they/them) and Juliana (she/her) have a rambling discussion about how and why we trust.




trash cat: Do you have the sound effects for the thing? So it did the like, [imitates two instances of the Mumble notification sound overlapping]

Juliana: Yeah.

tc: [laughing] Okay.


tc: You're listening to trash cat tech chat, a Librepunk podcast.


tc: Let's start out by introducing ourselves, right? So, um, I'm trash cat. My pronouns are they/them.

J: I'm Juliana. My pronouns are she/her.

How and why do we trust?

tc: Cool. And I'll kind of set up the topic for today. So, what I was thinking about was where does privacy come from? In the sense of like, we use the internet, which is obviously not a very private platform in general. But like, we use the internet to share information, but we want to do that selectively. We want to have privacy while using the internet. So how do we accomplish that? You know, there are services with like, good privacy policies and stuff that say like, "We will respect your privacy. We will not share your information with other companies. We won't analyze it ourselves to market to you." or whatever. Right? So there's privacy through policy where you have to trust that the company will adhere to that policy and doesn't have any weird loopholes in it and whatever. And then there's kind of privacy through technical measures where we use encryption or something to say, "Regardless of what your policy is, I'm protecting my data in a way that you simply don't have it to abuse." Right? And that's sort of the framing that I want to... or, that I think about with this. But, there is no true, you know, 100% "trustless" approach to privacy. Right? Everything comes down to trust in one way or another. So, maybe you don't have to trust that company that directly processes your data or whatever, because you're protecting it from them, but then like, there's, you know, you have to trust that the encryption works. Which, maybe you understand it well enough to analyze it, but like, you have to trust that the mathematics and cryptography community haven't figured out some way to break it that they're just not being public about. You have to trust that the software is good and that you get it from a legitimate source and whatever. So... I don't really know where I wanted to go with this, but I just kind of wanted to talk about like... how and why do we trust things?
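[Editor's note: As a toy illustration of "privacy through technical measures" versus privacy through policy, here is a minimal Python sketch. The one-time-pad scheme and all names are illustrative only, not any real service's protocol: the user encrypts before upload, so the provider only ever holds ciphertext regardless of its policy.]

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR with a random key as long as the message.
    # Secure only if the key is truly random and never reused.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # stays with the user
ciphertext = xor(message, key)           # all the provider ever stores
assert xor(ciphertext, key) == message   # decryption is the same XOR
```

[Whatever the privacy policy says, the provider cannot abuse data it never has; the trust moves to the mathematics and the software instead.]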

J: Yeah, and I think that's a really important question because obviously it's the basis of this like you were saying. You know, you can have a privacy policy, but are you actually going to adhere to it? And that's a difficult question, especially for, I think, lay people, and especially now when privacy is such a selling point for so much stuff.

tc: Well and like, even for Facebook, right? Like, Facebook has said, you know, "We're the privacy company" or whatever, and it's like, well, obviously that's not true.

J: Yeah, exactly. Like, "Oh, we bought" -- Well, so Facebook's thing is that they bought WhatsApp, and then they're using its encryption protocol for all their chat platforms, so for Facebook Messenger, Instagram Messenger, and obviously WhatsApp itself. But like, I don't really trust Facebook to actually end-to-end encrypt things.

tc: Yeah, and regardless of the end-to-end encryption, there's still a lot of other data. There's all the metadata that they can collect on people's communication.

J: Yeah. So, how would you evaluate a company's trustworthiness, do you think?

tc: I don't know. I mean... some of it has to do with what values do they claim to have, and do those seem to be what they actually adhere to? But it's really hard because fundamentally they're unknown entities, right? And I mean, that's in general the problem with using the internet, is you have to use third-party infrastructure that you can't fully trust 'cause you don't -- there's no -- even if there's transparency, there's no real transparency to it. Right? Even if, like, the server source code is public, there's no proof that that's what's running on the server or anything. So, I don't know. Yeah. [laughs]

Backdoors and laws

J: And I think this might be a good time to discuss "Thoughts on Trusting Trust" [Editor's note: The essay is titled "Reflections on Trusting Trust"] by Ken Thompson because it further complicates this question. And I guess to explain for people who aren't familiar with it--

tc: That includes me, by the way.

J: Oh, you've never-? Okay, it's an excellent paper, and I suggest you read it. It's pretty short, just like 2-and-a-half-ish pages. But the gist of it is -- it comes from a talk that Ken Thompson gave, and then later he wrote it up as this paper. And the gist of it is that in the, I guess it would have probably been 1969 before UNIX was really known outside of the people making it, Ken Thompson built a backdoor into the C compiler so that every time it compiled the source code for a UNIX operating system, it put his password and login information in it. And then, if it saw that it was compiling a C compiler, it put the code for that C compiler to do the same thing in it. So for every commercial version of UNIX up until this talk was given, Ken Thompson's account was secretly part of it, and he could log into any UNIX system on Earth.
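[Editor's note: A toy Python sketch of the trick, purely illustrative; the string matching and the "compiler" here are stand-ins, not Thompson's actual code. The compromised compiler recognizes two special programs, the login program and the compiler itself, and quietly extends each:]

```python
LOGIN_BACKDOOR = 'if user == "ken": grant_access()  # hidden account'

def evil_compile(source: str) -> str:
    """A compromised 'compiler' (real compilation is elided; it just
    returns the program text plus any injected code)."""
    if "login" in source:
        # Case 1: compiling the login program -> plant the backdoor.
        return source + "\n" + LOGIN_BACKDOOR
    if "compiler" in source:
        # Case 2: compiling a compiler -> re-insert this whole injection
        # logic, so the backdoor survives even when it is nowhere in the
        # compiler's own source code.
        return source + "\n# [backdoor-injection logic re-inserted]"
    return source

assert LOGIN_BACKDOOR in evil_compile("login: read user; check password")
assert "re-inserted" in evil_compile("compiler: parse; optimize; emit")
```

[Case 2 is the crucial part: once the binary is compromised, auditing the source code of the compiler, or of login, reveals nothing.]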

tc: That's really interesting. I had no idea that was a thing.

J: Yeah, it's pretty bonkers. And I didn't realize at first that he'd actually done it, and I was talking to a friend about it, and I was like "Wow, was this like, a hypothetical? Like 'What if...?'" and they told me, no, this actually happened. Like, this was a thing that was done. And Ken Thompson wasn't even a malicious actor, you know? He just did this because he thought it would be fun. He was playing around with friends, and he thought this would be a fun thing to do, and there's this gaping security hole that no one knew about for like, I don't know, 20 years or whatever.

tc: Yeah. That's incredible. It reminds me of the, oh geez, what was it called, like Swiss AG or something? [Editor's note: It was Crypto AG.] Where the... now I'm gonna not get the government agency right. I want to say the CIA, but one of the three-letter agencies in the U.S. sort of, like, was the shadow operator of this Swiss company that made all the encryption hardware for the world and had a backdoor in it. And so for 20 years or something, I want to say it was more than that, but I really don't remember the details on this, the U.S. had a backdoor into most of the encryption around the world.

J: Oh wow.

tc: Yeah. Because it was making all the encryption devices. [laughs] Right?

J: So when people were freaking out about China putting out 5G infrastructure because "What if they build a backdoor?" what they were actually worried about was something the United States had already done?

tc: Yeah.

J: You love to see it.

tc: Yeah. And I mean, the U.S. has been pushing for backdoors for a long time, right? And now we have the U.K. campaign about, what do they call it? "Going Dark". That campaign, pushing for either abolition of end-to-end encryption or backdoors in it. I don't remember the details. And Australia has its whole thing. It's a mess.

J: Australia?

tc: Yeah, Australia has a law, or... don't quote me on the details here... but Australia passed a thing a few years ago [Editor's note: I'm talking about the "Assistance and Access" bill.] that... it says something like "Any Australian company must be able to provide plaintext copies of user data" or something like that. I think it effectively bans Australian companies from providing end-to-end encryption for services that they run, basically.

J: That's deeply disturbing, especially as we're entering like... Mind if I get just momentarily a little political?

tc: No, go for it.

J: We're seeing a rise of fascism across the globe like we haven't seen since the 1930s, and so governments that maybe people once trusted or, I don't know, believed in having these powers is frightening because even if states are good actors (which I personally don't trust them to ever be, but) even if they were always good actors except when they were fascist, some of them are going to be fascist at least for a little while, and they shouldn't have that power.

tc: Yeah.

J: So is there anything that can be done about especially a state-level actor building in these backdoors? Would you just have to avoid anywhere where these laws apply?

tc: I mean... From what perspective? Are you talking like, you want to make a security product or are you talking like, as a user?

J: I was thinking more from a user perspective, but that's a good point. If you want to make a product too, I guess that would be even more complicated.

tc: Yeah... Um... So, one of the amazing things, right, about the internet and technology is that we can share data. Right? And we have free and open-source things that we can, you can evaluate them. I mean, not all of us, including myself, have the technical ability to do that, but we can have things with transparency, right? So that helps a lot, being able to look at things -- as a community being able to look at things and say, "Yes, this actually does what it's saying it does. It doesn't appear to have any backdoors or whatever." Sometimes I think it just becomes, you know, don't use... I mean, like, people talk about the Five Eyes a lot. I mean, especially VPN people talk about them, right? But people talk about the Five Eyes a lot, and like, "You want to avoid products from this country" or "service providers from these countries" because various laws would enable them to get your data or whatever. Which... that relates to the discussion of trust as well. You can have a company that says, "We respect your privacy. We'll only give away your data when legally compelled." But then you've gotta look at okay, under what circumstances -- even if you fully trust this company, right? -- under what circumstances can legal compulsion happen? Right? Like in the U.S. we have a law [Editor's note: I'm talking about the Electronic Communications Privacy Act of 1986 (ECPA).] that says -- I don't remember if it's specifically emails or data in general, but if you store data on a public server, right, like not your own, for more than 180 days, it's considered "abandoned", and the police don't need a warrant to get it. They just have to like, write a letter that says "This is relevant to an active investigation."

J: Wow. I did not know about that.

tc: Yeah, it's a law from like 1995 [Editor's note: 1986] when everyone just POPed their emails from the server and deleted them, right? The idea when it was written of people storing data long-term on servers was just... not a thing. But today it absolutely is. That's the primary way that people do things, right? So like, even if you trust your provider in theory, there's still legal compulsion. They still have to follow the law when it works against you. And like, that's something to bear in mind as well.

J: Yeah.

tc: It's a complex ecosystem. I don't know.

J: Yeah.

tc: I don't know. I think it probably helps some to use providers that are based in a certain [country] like people talk about Switzerland a lot, right? And ProtonMail talks about Switzerland a lot of course 'cause... that's where ProtonMail is based.

J: Indeed.

tc: And I think that makes sense for that reason for example. You know? In the U.S., your emails are generally not private. I mean, they're generally not anyway because everyone uses GMail. But even if you use, you know, a great U.S.-based email company, they still have to obey the law. Versus, maybe the laws in Switzerland or another country are better. But then that gets into this whole like, how can you expect people to know the law? I don't know the law! No one knows the law, right? It's so complicated to look at global "What are your rights?" in every single country or whatever, right?

J: Yeah, and just to give an example of how complex the law is, the United States doesn't have an up-to-date copy of its code of laws available. The master copy is literally a stack of paper locked in a wooden box somewhere, and it takes a few years for new laws to actually be digitized and made publicly availab-- well, in the context of the internet age publicly available. And then imagine amplifying that across every single country. It's a massive task.

tc: Yeah. So then a lot of it, I mean, speaking pragmatically as a person, right? A lot of that process becomes "Well, what do other people say?" Right? Like, "People on the internet say that Switzerland has good privacy laws. Is that where I should look to for whatever?" And like... maybe but that's not a great approach to things, right? That's a lot less rigorous than I would like it to be, but it's really hard or impossible to be more rigorous, so... what do you do as a person? Right?

What providers/services we use, self-hosting

J: So maybe it would be a good time to discuss what we as individuals do. Mine will probably be shorter, so I'll go first. It's kind of ironic that I'm on a show about privacy because I'm notoriously bad at privacy. I have a VPN that I use because my ISP blocks some sites, which is weird, and also for other completely legitimate purposes. And that's really about it. 'Cause I'm sort of a child of the internet, so I had a Facebook account since I was like 12. I've had a Google account since I was like 8. These big corporations probably know more about me than I know about myself, up till maybe 3 or 4 years ago when I started moving away from them. And what I wanted to say about that: The VPN I use is actually run by someone I know. I don't know them personally in real life, but they're in my sort of internet circle, and they operate the infrastructure for another service that I use. And they just happened to mention one day that they have this company where they have an end-to-end encrypted Matrix server, they have a VPN, and they have an encrypted email provider. So I set up the VPN, and I'm probably going to be using their other services too at some point.

tc: That's cool.

J: Yeah, it's lucky that I have that connection to a person.

tc: Yeah, for sure. I am that person in my life. I do a lot of self-hosting and stuff. I don't host my own email because email... is just incredibly cursed. I do run a Fediverse server, but it's not the one that I actually use. I use Librepunk because I like that community. I like Puffball. And I don't want to run full-on Mastodon. What I run is GoToSocial, and GoToSocial isn't, like, ready to use, yet, let's say. So I'm using it, but it's not the main thing that I'm going to be running. And I don't like Pleroma. [laughs]

J: For good reason.

tc: Yeah. But I run my own XMPP server. I run my own Matrix server. I run... I mean, I have like, websites and stuff that I run as well. So, I like to do as much self-hosting as I can. There's some irony in that, I think, because... So, like, self-hosting is nice because you're in control as much as possible of your data, and you're running things yourself, and that's great. But it's also... There are certain privacy things that you can't really -- that are incompatible with self-hosting. Like, it's hard to, say, run a server from your own home and also have a high level of anonymity, for example. Right? There's some trade-off to be made there. You know, when you're self-hosting, your ISP can see all of the -- all the domains that your server talks to, and if it's like, just you on that server, then that's what you're doing. [laughs] So, like, put a different way, I use Tor a lot of the time. I use Tor most of the time when I'm not interacting with my own services that I run myself. But then when I am interacting with my own services that I run myself, they're talking directly to other services that I don't control, and my ISP can see that stuff. So, I don't know. There's some stuff there. And then I also use encryption a lot when I can, and, yeah. I don't know.

J: Yeah, it's a tricky situation. There's also the question, to go back and touch on legality, if you're hosting a server in your home, you're subject to whatever laws are active in your country. And obviously, as we've discussed, in at least most Anglophone countries it sounds like the laws aren't great.

tc: Yeah. [laughs]

J: But on the other hand, if you're doing a VPS, a virtual private server, right? you're using someone else's computer, and suddenly there's a new link in the chain of trust.

tc: Yes. And that's... I don't know, that's why I'm not a fan of running things on a VPS. And I actually, the way that my network is set up, I do have a VPS as a part of it, but it just relays traffic; it doesn't decrypt it. So I terminate my own TLS; I just, because of networking restrictions from my ISP, it is necessary as like an extension of my ISP sort of.

J: So I'm guessing you have physical hardware wherever you are, and then you relay all of the traffic through your server through a different VPS?

tc: Yeah.

J: That makes sense.

tc: Yeah, but it just forwards IP packets. It doesn't decrypt anything. So, that's where we're at. But then that leads into things, you know? So, say the VPS provider is malicious, right? They can record information about -- there's metadata that's available about not what people are doing necessarily, but they could record which IPs connect to that VPS IP, right? They can... The DNS for my domain points to the VPS, so while it's true that I terminate my own TLS, it would be absolutely possible for the VPS to set up its own TLS certificates and get them signed by Let's Encrypt or whatever, and it could hijack my site if I didn't have control over it. So like... I don't know, threat modeling is weird, right?
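[Editor's note: A minimal Python sketch of the kind of dumb relay described here, assuming asyncio and a made-up home address: the VPS shuttles bytes in both directions but never holds the TLS keys, so it can't read the traffic, only observe who connects.]

```python
import asyncio

async def pipe(reader, writer):
    # Copy raw bytes one way until the connection closes.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def relay(client_r, client_w, home_host="203.0.113.7", home_port=8443):
    # Connect to the real (home) server and shuttle bytes both ways.
    # TLS is terminated at home, never here on the VPS.
    home_r, home_w = await asyncio.open_connection(home_host, home_port)
    await asyncio.gather(pipe(client_r, home_w), pipe(home_r, client_w))

async def main():
    server = await asyncio.start_server(relay, "0.0.0.0", 443)
    async with server:
        await server.serve_forever()
```

[Even so, forwarding-only limits what the relay can read, not what a malicious operator could do: since DNS points at the VPS, its operator could still obtain certificates of their own, as discussed above.]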

Threat modeling and free/open source software

J: Yeah. And of course, threat modeling is extremely important. We have a mutual friend who likes to say whenever there's like a new privacy technology or whatever, "what's your threat model?" You know? Like they'll mention the blockchain in particular. It's like, okay well it's secure from, you know, attacks X, Y, and Z, but what about like an ostensibly legitimate transaction that also steals a bunch of Bitcoin or whatever?

tc: Yeah. I think I was the one who said that. [laughs]

J: Oh, maybe.

tc: That last bit. I said something about that, about like... you know, the blockchain is highly secure as people say, but what that means is it has this very strong integrity property that says you can't go back and modify it. But like, how well does it protect against person-in-the-middle attacks? Right? That's not the thing that it's actually good at. [laughs] And that's the thing that matters, right, the "Can someone else steal my money?" property. That's the one that actually matters, not the "Can someone go back and rewrite the chain?" Like, people don't really -- That's not a thing that people understand and care about.

J: Yeah.

tc: That was not an important rant, but... Yeah, like, threat modeling. [laughs] Understand what's important and... yeah.

J: So, how would you -- Let's say a non-- We're both fairly technical. I'm a programmer, and you do a bunch of self-hosting and privacy stuff, as you've discussed. What would just, you know, a non-"computer toucher" as we like to say, how would they figure out their threat model and maybe try to find resources for protecting themselves where they're most vulnerable?

tc: So... I don't have an easy answer 'cause I think threat modeling is just not easy, right? But I think there are some specific things that... Some people have very specific needs, right? Some people have, you know, "I am being harassed by someone" or stalked or something, or "I have this specific concern, this specific need for this specific type of privacy." I think a lot of us don't have that, don't have a specific thing, but we generally feel like privacy is a thing that matters, and mass surveillance is bad, and other, you know, general ideas like that. Right? Would you say that's fair?

J: Yeah, I think so.

tc: So I think if you have a very specific need, then you threat model around that, obviously. But if you're just, you know, "I'm just a normal person." (I'm not speaking as myself, obviously. I wouldn't call myself a normal person.) But if you're just like, "I'm just a normal person, but I want to be better about privacy" or whatever... Where I think it's most helpful to start is to look at mass surveillance and say, "How are we being surveilled, and how can we fight back against that?" And I think as, you know, an average Joe, everything that you can do helps. It may help your own privacy personally, but regardless, it helps contribute to a culture where privacy is normalized, where mass surveillance becomes harder, it becomes more expensive to do. And I think that's really the goal. So, things that I would personally prioritize are things like using free and open-source software, as opposed to non-free/proprietary/closed-source software. And like, saying that, there's a generalization in that statement, right? It's not automatically true that if it's -- if the source code is public that it's safer or more private. But as a trend, it usually is.

J: Yeah.

tc: And if you're not technical enough to, you know, feel like you are able to make that call, I think that's a good kind of rule of thumb. I would say, using end-to-end encryption. That's something that I value a lot, the ability to just have conversations and not have those go into some database. With the conversation about free and open-source software, getting away from big tech companies that we all know gobble up everyone's data. If your concern is government surveillance, especially in the U.S., but I think generally globally, if your concern is government surveillance, much of that happens via corporate surveillance. Much of that is companies collect data on you, and then your government gets the data from those companies.

J: Yeah.

tc: So I think that's where I would start, is looking at those kinds of things.

J: So moving away from big tech companies and looking into end-to-end encrypted communication platforms?

tc: Yeah. And using free and open-source software when you can. You know, Linux is great.

J: Yeah. And just to mention, the reason that ironically having all of the source code publicly available tends to mean it's more secure is that you have a bunch of eyes on it. And not only that, you have a bunch of nerdy eyes on it, and nerds love to be right about things. So if some security expert is browsing the source code and sees a hole, they're going to tell someone.

tc: I mean, and also like, the culture is just completely different. I'm not speaking from experience. I'm not this kind of security person. But what I've heard from people who work in security is with big companies that make all this proprietary software and whatever, what tends to happen is if you discover a vulnerability, a security vulnerability, and you try to do the responsible thing, and you go to them and say, "Hey, I found this problem", they say, "Okay, shut up about it, or we'll sue you." Right? And they don't fix the problem; they just, like, they try to issue you a gag order essentially. [laughs] It's like, that's not -- that's not a healthy security culture.

J: Yeah. Whereas in free and open-source software, if a project had a vulnerability reported and refused to do anything about it, people would raise a stink and stop using whatever that project is making.

tc: Yeah. It's just, I mean, in general it's more open. It's more transparent. And with that transparency, it's harder to... it's harder to hide things intentionally. Like it's harder to intentionally implement a backdoor. Not saying it's impossible by any means! But it's harder. There's more verification. Or, there's more potential, at least, for people to look at it and find issues.

J: Yeah.

tc: You asked me that question. I wanna know what you think. What would you recommend to people in regards to threat modeling?

J: So, "Think about what groups would have an interest in hurting you" I think is my personal starting point. I am a transgender woman in the Deep South. I live in Alabama at the time of recording. And so a major threat, physically and digitally, for me is transphobes, is people who want to do violence to people like me. So that does at the time of recording, again, tend to be private actors. So it's a little easier. I use aliases. I use encrypted email. I don't share personal information online, and I don't really use major tech platforms. So that's probably fine. Unless someone is really motivated, it's unlikely that anyone who finds out I'm trans through the internet is going to find me in real life. Yeah, I think that's a good place to start.

tc: Yeah. You touched on something that I think is really important to mention, which is, you said, "unless someone is really motivated". And I think it's really important to say, there's no such thing as total security or privacy. There's no such thing as 100% safe. But -- well, and that's why we need to look at, that's why we threat model, right? That's why we need to say, "What are we concerned about, and how can we mitigate realistic threats?" Not "How can we make everything perfect?" because you can't! A highly motivated attacker will succeed, right? As a general thing.

J: Yes.

tc: If they have enough motivation and enough resources, whatever you do will not be enough. But most of us are not gonna be targeted by highly motivated, highly equipped attackers. So, yeah.


J: I feel like a lot of people when they hear "private chat", they're probably gonna think of Signal. But are there reasons Signal should not be trusted?

tc: So... [laughs] That's a whole thing. I have a bunch of issues with Signal, and some of them are trust issues, let's say, and some of them are not. So I'll try to put aside the ones that are not trust issues like "It's centralized." Okay, who cares, right? It's encrypted. That's fine. But trust issues... One of the big things with Signal is -- at least has been in the past... They may have improved some of this over time? So, not positive. But um, the use of... Like, Signal tries to integrate well into the operating system that it's running on. So that's either iOS or Android. (There's also a desktop version of Signal, but it's not a standalone program, so I'm gonna talk about iOS and Android Signal.) And so that means it uses push notification code from iOS -- from Apple and from Google. At one point in time, there was a blog post that I read and didn't fully understand, so again, take this with a grain of salt. And this was from like 2017 or something, so they very well may have fixed this. But there was a blog post that went into "Here's how Google Services are incorporated into Signal on Android and why that gives Google potentially a backdoor into the app."

J: Oh no!

tc: And I think the thing there, I think, was the specific implementation was... push notifications on Android were not... let's say quote-unquote a "backdoor". They let Google see metadata about who's talking with whom, right? But they don't allow Google to like, download and run arbitrary code. But the way that Signal at the time had done Maps integration (and I think the reason that it did that was like, "Send someone your location") but it pulled in some Google library that Google, because it runs Android and because of its relationship with that platform, had the ability to download, in the background, arbitrarily download some executable that isn't verified by the user as trustworthy in any way. I mean, essentially if you have a Google version of an Android phone, Google has a backdoor into the phone, right? But the thing that gave Google kind of backdoor into Android was integrated into the Signal app. From what I understand from this blog post that I didn't fully understand. Right? So, there are things like that that are concerning like "Does this give Google (or Apple or whoever) too much power over it?" And then there are things like, you know, you can look at where does Signal's funding come from? Okay, so this is like... convoluted string of things, but from 2013-2016, Signal was given something like 3 million dollars in total by a group called the Open Technology Fund (OTF). The Open Technology Fund is... it's like, reorganized, and it's currently at least supposedly an independent organization, but that was in 2018, I think. It was after this period had ended. So I'm gonna talk about historically at the time that this money was being given to Signal, right? Signal was given about 3 million dollars by OTF. And OTF is like a child project of -- there might be another step in-between, I don't remember, but it's like -- it's downstream from Radio Free Asia, which is a propaganda project that the CIA started in... 
a long time ago, I don't remember when, but, like, several decades ago [Editor's note: 1951]... to publish anti-communist propaganda in Asian countries. So maybe there's enough, like, whatever there, but basically, Signal was given a bunch of money by what works out to the CIA. So like, should be suspicious to us? And we can look at why! And we can say, the CIA has a legitimate vested interest in this privacy tool existing. Tor also, the Tor Project also gets a lot of funding from OTF. The CIA has a vested interest in these privacy tools existing because the CIA wants them to be used overseas to spread, let's say pro-U.S. or pro-capitalist or whatever propaganda in countries where maybe that's not permitted legally, or maybe that's not safe to talk about. So it's an anti-censorship thing where they want to be able to have channels of communication where people can talk about these ideas that maybe are forbidden by their governments. So like, that makes sense as a reason that the CIA would want to fund projects that it actually wants to be effective. Right? So maybe that's not the most suspicious thing ever. But maybe it is. And the other thing I think about with this, with Signal in particular, 'cause Tor... is this decentralized project, right? Tons of people around the world run Tor nodes. Signal is a centralized thing. So the thing that I think about with Signal with it being funded by the CIA for the purpose of enabling people in other countries to talk about things that the CIA wants them to be able to talk about... That is compatible with a U.S. backdoor. Right? Saying, "Google can read the messages, but foreign governments should not be able to." Those two ideas are, at least theoretically, compatible. And that's what kind of concerns me, is I think Signal is set up in a way where it can legitimately be effective overseas but not here.

J: Yeah. And it's interesting you point out that the idea of having secure communications for people in other countries that the United States can just spy on is okay goes back to trusting your government. In the United States especially, I find that ironic because the entire system is designed from the assumption that government cannot be trusted. Like we have the three branches of government so that they don't get along, so that the government is not effective, and therefore so that it cannot be as oppressive as easily.

tc: Yeah. In theory.

J: In theory. In practice, obviously, democracy is very fragile.

tc: Mmm. Yeah. I wouldn't probably call what we have here democracy, but that's...

J: Yeah... Yeah, I have a degree in political science, so we could go into that, but yeah, it is not technically democracy.

tc: Yeah. I don't have a degree in political science, so I'll let you speak to that, not me, but...

J: Yeah, with my whole bachelor's... [laughs] It's not democracy. That's all.

tc: Yeah. But yeah, like, so... there are things that I'm concerned about when it comes to Signal, let's say. When it comes to Signal and trust. But also, like, compare Signal to WhatsApp or [laughs] iMessage, right? By the way, side note for anyone who doesn't know, iMessage's security is ridiculously bad. That's why I'm throwing it under the bus there. But like, WhatsApp ostensibly uses the Signal Protocol, uses the same thing as Signal does. I trust Signal a lot more than I trust WhatsApp. I don't like Signal. That's where my threshold is. I don't like Signal. It's below that. But WhatsApp is way down there because it's proprietary. The source code is not public. We can't study it. We can't audit it. Because it's run by Facebook. Right? There was the recent FBI leaked document thing that talks about "What information can the FBI get from different messaging apps?" And there's very little information that the FBI can get from Signal. Now, some of that, it should be said, is because Signal as a company simply chooses not to collect it. And that's the first type of trust, right? The privacy by policy. We are trusting that Signal doesn't store metadata about who's talking with whom. (And yes, Signal has its sealed sender thing and whatever, but still.)

FBI document on messaging apps

tc: And then there's... I want to talk about another thing there, sorry.

J: Go for it.

tc: So... because I brought up this document, I want to address a specific thing that I've seen people saying, which is "Telegram is good because the FBI doesn't get any information from Telegram." And I want to be very clear: [laughing] Telegram's security practices and privacy practices are not good; the FBI just doesn't have legal jurisdiction over the company that runs it. And like, they have the luxury of not being compliant with requests from the U.S. government. But that doesn't mean that you should use Telegram as your encrypted messaging app.

J: Right. Was this the same document that revealed that some companies don't -- which, this may have just been known before this, but -- some companies don't even require legal compulsion? Like I think Facebook and Apple are the ones. They'll just, if the police are like "Hey, could you just give us information?" They'll just give it to 'em.

tc: Yeah. Um, I mean, that was already known information, I think, but the document did highlight some of that, like [laughing] What was it? It was like, "Facebook" -- or, "WhatsApp, we can get..." You know, like, message content is encrypted, right? They can't get that. But then, like, they can get user data, like metadata, at regular intervals. It's like every 15 minutes or something they can get updates or, it was something like that.

J: And if you can see what someone is doing, even if you don't know exactly what it is they're doing, but who they're interacting with, what services and stuff they're interacting with every 15 minutes, I mean, that's stalker levels of information.

tc: Yeah. And it also talked about in that document the whole like, "Messages are encrypted, but if they have the backups enabled, we might be able to get 'em from that." Which... ugh, Apple. This doesn't really relate to trust, probably, but for the record, just because we're talking about iCloud Backup... Apple had plans to encrypt iCloud Backup, or iCloud... yeah, encrypt people's backups in iCloud. There were plans in, I want to say, 2018, which, then the FBI talked to Apple, and then Apple dropped those plans. So. Hm.

J: Yeah, little suspicious.

tc: Yeah.

J: This is another way in which free and open-source software kind of has some extra trustability built-in, is that most of the time free and open-source software is not, um, at least not managed by a company, so you don't have the same sort of, I guess you'd say, pressure to play nice with the government because there's no profit motive. You know, if the government doesn't want to do business with them, okay, they don't care. You know? That's not where their money's coming from.

tc: Yeah.

Trust on the Fediverse

tc: So, Librepunk, the Mastodon instance I'm on, is down right now because of reasons outside of my control. And I needed to message you last night to talk about planning for this, to talk to each other. So I have an account on another instance in another name, completely divorced from my identity as "trash cat", and I messaged you from there and said "hi, this is trash cat. librepunk is down. here's the context, what's going on." And so I think that that's something that's worth talking about, like, how do we trust people online who say that they are certain people? And it also, I think, extends into a bigger conversation that I've been thinking about for a long time, like since I first joined the Fediverse, which is, it would be really easy to create an account with someone else's profile picture and their username on some instance that they don't have an account on and say, "Hi, I am this person." Right? Like, how do we kind of verify identity on the Fediverse? And I think mostly the answer is we don't, right? But I want to know if you have any thoughts about this. How do we trust that people are who they say they are?

J: So, there are two prongs I want to take in response. The first is the direct instance you're talking about because it's kinda funny. I was actually hanging out with some other friends in a voice chat, and I asked them, but I was like, "I just got this message from this account at this domain saying they're trash cat. Does that sound right?" And they could say, "Yes, I've seen this domain. That does sound right." But more generally, trusting people online is a really interesting question because as I've mentioned, I'm a child of the internet. I grew up online. I've used pseudonyms probably... in more raw social interactions in terms of time than my real name at this point. So... yeah, I don't know. I think for me the way it works is that I'm not interested in who a person is in the legal sense, like I'm not worried what's on their driver's license or their birth certificate or any of that stuff. I'm worried about who they are, kind of as a person. And so it's just like in real life where you come across someone, and you begin to build a relationship, and while you're building that relationship, you might be a little bit more wary about what you say, or you might maybe even probe a little bit to bring up topics to see "What are their values? How do they feel about things that are important to me?" And then over time, you can develop trust not based on some sort of external definition of "Oh, this is trustworthy" but on a more personal and human level. And then from there, you do, like I said, talk to friends or have verified channels of communication that you can verify that people you trust are talking to you from a different avenue than usual. And on the Fediverse... This actually came up the other day, at least on Mastodon, which is probably the most popular Fediverse software, for better and for worse. 
[laughs] The founder was -- so, for those who don't know, Mastodon is kind of, sort of a federated, open-source Twitter clone, and the founder of the project did this intentionally. He's consciously trying to cultivate that sort of microblogging platform. And obviously something that exists on Twitter is the check mark, where, you know, from what I understand, Twitter asks you to send in a government ID to verify who you are. Well, we don't have that on Mastodon. What we do have is PGP signing and the rel="me" link. I don't use this, and I don't know if I fully understand how it works, but I know the gist. And the gist is you put out a PGP public key, and then you have an HTML element on your web page that has a link with that key, and you set up your profile so that it verifies the key when it checks your profile. And if the specific link that is supposedly verified is in fact verified with that key, it turns green.

tc: Okay, so I do want to clarify that there's not, like, a signing that happens there, at least from what I've seen. So like, if I list my website, for example -- say I'm at, and I list in my profile, I can get a green check mark saying I'm at -- actually I run, but there doesn't have to be any type of like, cryptographic signature going on. It just fetches the web page and checks "Is that <link rel='me'> thing there?"

J: Yeah.

tc: I just wanted to clarify there's not cryptographic proof per se; it's just checking, like, that a website says this thing.
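[Editor's note: the check tc describes can be sketched roughly as below. This is an illustrative Python sketch of the general idea, not Mastodon's actual implementation; the `page_claims_profile` helper and the example URLs are hypothetical. The point is that the server only looks for a `rel="me"` link pointing back at the profile; no signature is checked.]

```python
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    """Collect href targets of <link>/<a> elements carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.me_links = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("link", "a") and "me" in (a.get("rel") or "").split():
            self.me_links.add(a.get("href"))

def page_claims_profile(page_html, profile_url):
    # The "verification" is only this: does the linked page contain a
    # rel="me" link pointing back at the profile URL? No cryptography.
    p = RelMeParser()
    p.feed(page_html)
    return profile_url in p.me_links

html_doc = '<html><head><link rel="me" href="https://example.social/@someone"></head></html>'
print(page_claims_profile(html_doc, "https://example.social/@someone"))  # True
```

Anyone who controls the linked web page (or sits between the Mastodon server and that page) could make the check pass, which is why tc stresses that the green check mark is trust in the server's claim, not cryptographic proof.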

J: Yeah, I'm looking at it now. I think I got confused because I'm in a lot of free software spaces, and there's a lot of overlap between free software and privacy for reasons we've discussed, and so a lot of people use that feature to verify against a Keybase of some kind.

tc: Yes, that makes sense.

J: But yeah, so I mean, on the Fediverse, that's really our only way to verify in-channel, I guess, that someone is who they are saying they are. So far.

tc: Yeah. And that, I think, is, um... There's an interesting thing with trust there as well 'cause... Don't quote me on this, but I think that that check is done server-side, not client-side.

J: Yes.

tc: So what that means is, you're further trusting that the server that you -- I mean, you could obviously click the link, view source, and check what is the rel="me" link there? -- but just by seeing it in green, you're trusting the server, um, the Mastodon server to tell you that this person is verified, rather than verifying it yourself.

J: Yeah.

tc: So, yeah, there's that. And then, yeah, we really don't have a lot of that. And I think mostly that's okay... And interestingly, it's mostly not a big issue... which is genuinely surprising to me. I would expect there to be a lot more people who are pretending to be other people. And maybe there are, and I'm just fooled.

J: I was just gonna say, the reason that it mostly works is because there is a concerted and near-universal (at least in the circles I'm in) community effort to verify this stuff, to make sure people are who they say they are, or to find out -- you know, a big thing is bad actors -- find out if a new account is an old bad actor.

tc: Yeah. But I'm just thinking like... It would be really easy to take, especially someone who was not ever on a FediBlock list, was not ever identified as a bad actor, just chose to leave at some point -- it would be really easy to choose one of their identities and just assume that identity on the Fediverse. Right?

J: Yeah.

tc: It's harder, I think, to impersonate someone who's already there because they can say "I am not this person." And like, we see that on mainstream social media, right? Where someone will say... like, popular people will say, "This other account is pretending to be me. This is a scam." Right? But I think it's... when people aren't there to speak up for themselves, it would be very easy to imitate them, and I... I just think about that sometimes.

J: That's a really good point.

tc: Yeah.

Social trust and key trust

J: So, I'm noticing a trend in what we're talking about in regards to trust where there's like a base of trust. You have to assume trust in someone or something and then kind of build off of that.

tc: Interestingly, this relates to a thing in cryptography. So, there's a big problem in cryptography, which is -- so, I'll try not to get super deep into anything here, but -- we have this thing called public key cryptography where we have our characters, Alice and Bob. Alice and Bob want to have a private conversation with each other. That's how most cryptography stories start. So, Alice has 2 keys: a private key and a public key. And Bob has 2 keys: a private key and a public key. And they work in this asymmetric way where the public key can be used by others to encrypt messages for you, and the private key can be used by you to decrypt those messages. And so, you share your public key with everyone, and you keep your private key private, is the basic idea. So there's a very important question in cryptography, which is: Alice and Bob exchange public keys. How does Alice know that the key that she gets (these keys are just random numbers in the computer) -- how does Alice know that this key that she gets is actually Bob's key? And it's a really important question because if it's not Bob's key, someone can launch a person-in-the-middle attack, and so, like, Mallory, our attacker, can go to Alice and say, "Hi Alice, I'm 'Bob'." And she can go to Bob and say, "Hi Bob, I'm 'Alice'." And then, Alice sends a message to Mallory, believing Mallory is Bob. Mallory decrypts it, re-encrypts it for Bob, and sends it to Bob. Bob believes he got it from Alice. Right? So it's really important that we know that keys are authentic, but it's a really hard problem. And, this is just this whole big issue in cryptography. "Public key infrastructure" is the thing to read about if you want to know more about this. But one of the approaches to this is called "trust on first use". And what that means is Alice accepts the key that she receives from Bob-or-whomever and says, "I will accept that this is Bob's key. I will not allow this key to change."
And so, whomever it is that she's talking to initially, she will always be talking to that person. She won't allow someone else to come in and say, "Oh, I'm Bob now." And I don't know why I went through that whole explanation. It relates in my head to this idea of "You start with some initial trust, and then you extend it", but I don't know that it's actually helpful information to go anywhere with. But there you go.
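[Editor's note: trust on first use can be sketched as a small key-pinning store. This is an illustrative Python sketch, not any real messenger's code; the class name, peer names, and key bytes are made up. The first key seen for a peer is pinned, and any later, different key is rejected.]

```python
import hashlib

class TOFUKeyStore:
    """Trust-on-first-use: pin the first public key seen for each peer,
    and refuse any later key that doesn't match the pinned one."""
    def __init__(self):
        self.pinned = {}  # peer name -> key fingerprint

    @staticmethod
    def fingerprint(public_key_bytes):
        # A fingerprint is just a short digest of the key material.
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check(self, peer, public_key_bytes):
        fp = self.fingerprint(public_key_bytes)
        if peer not in self.pinned:
            self.pinned[peer] = fp       # first contact: accept and pin
            return True
        return self.pinned[peer] == fp   # later contact: must match the pin

store = TOFUKeyStore()
print(store.check("Bob", b"bobs-real-key"))  # True: first use, key gets pinned
print(store.check("Bob", b"bobs-real-key"))  # True: same key as pinned
print(store.check("Bob", b"mallorys-key"))   # False: key changed, reject
```

Note what TOFU does and doesn't give you: if Mallory was already in the middle at first contact, her key is what gets pinned. It only guarantees that whoever Alice talked to first is who she keeps talking to.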

J: Yeah, it is interesting, and I think it also ties back into the whole community aspect again, right? Because this is a very human way to verify things. You assume when you meet someone and they tell you [their] name that that's actually their name, and that they're not pretending to be somebody else, that they're being honest about, you know, who they are, their values, whatever. And so, that's just kind of the human way to do it. And I don't know why I was trying to think of this, or what made me think of this, but when you encrypt a Matrix room, the way you verify the key is that Matrix pops up -- well, in my experience, when I've done it, it has a few ways, I think, but one way it does it, it pops up a list of emojis. And you are theoretically supposed to look at the other person's screen and see what emojis they got and make sure it's the emojis you got so that you know there's no person in the middle. But in the age of the internet, I've never been in a physical space with someone I talk to on Matrix, so it's just like, well, I mean, I guess this is the account they gave me, so I'm just gonna assume this is them and verify even though I can't actually verify.
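[Editor's note: the emoji comparison Juliana describes is Matrix's "short authentication string" verification. Below is a heavily simplified Python sketch of the general idea only, not the actual Matrix SAS algorithm (which uses HKDF over the key-agreement output and a fixed 64-emoji table from the spec); the emoji table and secret here are made up. Because both sides derive the emojis from the same shared secret, a person in the middle, who shares a different secret with each side, would produce mismatched emojis.]

```python
import hashlib

# Toy emoji table; the real Matrix spec defines a fixed 64-entry list.
EMOJI = ["🐶", "🐱", "🦁", "🐎", "🦄", "🐷", "🐘", "🐰"]

def short_auth_emojis(shared_secret, n=7):
    # Hash the shared secret and map the first n bytes onto the table.
    digest = hashlib.sha256(shared_secret).digest()
    return [EMOJI[b % len(EMOJI)] for b in digest[:n]]

alice_view = short_auth_emojis(b"secret-from-key-agreement-with-peer")
bob_view = short_auth_emojis(b"secret-from-key-agreement-with-peer")
print(alice_view == bob_view)  # True: same secret, so the emojis match
```

The comparison only proves something if it happens over a channel the attacker can't forge, which is exactly Juliana's point: reading the emojis back over the same possibly-compromised connection verifies nothing.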

tc: This is one of the reasons that I think, um -- This is a different conversation about cryptography and private messaging, but this is one of the reasons that I think we need, like, a multi-layered version of how we understand key trust. And like... no one wants that, I feel like, because any time that you add more complication to some encrypted system, it means fewer people are gonna use it [laughs] right? Like, any time there's friction, people are not going to like it.

J: Yeah.

tc: But it is materially different to say, "I have met up in-person with this person, and I've verified that this is the same key" (or "We already had an established secure channel, you know, verified in-person, and I got this key from there, so I'm just extending that trust") -- That's materially different from "I verified with this person through some insecure side-channel" like "I sent them a text message or an email" or something that theoretically could have been compromised but is more secure than not verifying, which is materially different from saying, "I have not verified this in any way, but I will trust that it's correct", which is very materially different from saying, "I do not trust this." Right? So I think that there are at least 4 different trust levels that need to be different, that need to not blend together because they are... because like, you don't want to say "I fully trust this" if you don't fully trust it, but you also don't want the two options to be "I haven't verified this at all" and "I've fully verified it", right? There needs to be some in-between.
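[Editor's note: the four trust levels tc distinguishes can be written down as an ordered enum. This is an illustrative Python sketch; the level names and the `can_send_sensitive` policy are hypothetical, not from any real messenger's API.]

```python
from enum import IntEnum

class KeyTrust(IntEnum):
    """The four materially different levels tc describes, weakest first."""
    DISTRUSTED = 0          # "I do not trust this key"
    UNVERIFIED = 1          # accepted on faith, never checked (plain TOFU)
    SIDE_CHANNEL = 2        # compared over an insecure side-channel (SMS, email)
    VERIFIED_IN_PERSON = 3  # compared in person, or inherited from a
                            # previously in-person-verified secure channel

def can_send_sensitive(trust):
    # A hypothetical client policy: warn unless at least side-channel verified.
    return trust >= KeyTrust.SIDE_CHANNEL

print(can_send_sensitive(KeyTrust.UNVERIFIED))          # False
print(can_send_sensitive(KeyTrust.VERIFIED_IN_PERSON))  # True
```

Making the levels an ordered type is the point of tc's argument: a UI that collapses these into "verified" versus "unverified" throws away the distinction between levels 1, 2, and 3.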

J: Yeah, especially in the age of the internet, as more and more relationships are gonna be, you know, "We've never been in the same physical space."

tc: Yeah. I don't know. It's all... it's all a whole thing. [laughs]

J: Yeah.


tc: You've reached the end of this episode of trash cat tech chat. Check out the show notes for links and other information. This podcast is licensed under a Creative Commons Attribution-ShareAlike 4.0 license. Music by Karl Casey @ White Bat Audio.
