A fascinating but ominous software story dropped on Friday: a widely used file compression software package called “xz utils” has a cleverly embedded system for backdooring shell login connections, and it’s unclear how far this dangerous package got into countless internet-enabled devices. It appears the persona that injected this played a long game, gaining the […]
Looking forward, I'm still more worried about the fact that state-backed threat actors are targeting open source projects via this social engineering route than I am about the technical issues.
I think that the technical avenues the attacker exploited can be addressed, at least to some degree.
If autoconf is a good place to hide stuff, maybe shift away from distributing autoconf-generated files in the source tarball; I don't know all of the problems that come up with that, but I understand that at least some of them historically involved backwards-compatibility breakage across autoconf versions.
If maintainers are putting up bogus tarballs that differ from what's in git, have GitHub disallow this, and for projects that currently do it legitimately, have GitHub figure out how to address their needs otherwise. If that can't be done, at the very least, highlight the differences.
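For the tarball problem, a lot of the checking could be automated. As I understand it, the malicious build machinery in xz existed only in the release tarballs, not in the git repository, and that's exactly the gap a forge or a distro could diff mechanically. A rough sketch of what I mean (hypothetical tooling, assuming `git` is available locally and the release tag exists; the generated autoconf files that legitimate tarballs carry today would show up as noise here, which is another argument for the previous point about not shipping them):

```python
#!/usr/bin/env python3
"""Rough sketch: diff a release tarball against `git archive` for the same tag.

Hypothetical helper, not an existing tool. Usage:
    compare_release.py pkg-1.2.3.tar.gz /path/to/clone v1.2.3
"""
import subprocess
import sys
import tarfile
import tempfile
from pathlib import Path


def tarball_files(tarball: Path) -> dict[str, bytes]:
    """Map member path (minus the single leading directory) to file contents."""
    out = {}
    with tarfile.open(tarball) as tf:
        for member in tf.getmembers():
            if member.isfile():
                name = "/".join(member.name.split("/")[1:])  # strip "pkg-1.2.3/"
                out[name] = tf.extractfile(member).read()
    return out


def git_tag_files(repo: Path, tag: str) -> dict[str, bytes]:
    """Use `git archive` so we compare against exactly what the tag contains."""
    with tempfile.NamedTemporaryFile(suffix=".tar") as tmp:
        subprocess.run(
            ["git", "-C", str(repo), "archive", "--prefix=x/", "-o", tmp.name, tag],
            check=True,
        )
        return tarball_files(Path(tmp.name))


def main(tarball: str, repo: str, tag: str) -> int:
    released = tarball_files(Path(tarball))
    tagged = git_tag_files(Path(repo), tag)
    suspicious = False
    for name in sorted(set(released) | set(tagged)):
        if name not in tagged:
            print(f"only in tarball:  {name}")  # generated files... or worse
            suspicious = True
        elif name not in released:
            print(f"only in git tag:  {name}")
            suspicious = True
        elif released[name] != tagged[name]:
            print(f"contents differ:  {name}")
            suspicious = True
    return 1 if suspicious else 0


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:4]))
```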
The ifunc hooks can probably have some kind of automated auditing added, if they’re a favored vector to hide things.
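I don't know exactly what that auditing would look like, but even something crude – flag any binary whose set of IFUNC symbols changes between releases and make a human look at it – would be a start. A rough sketch of the mechanical part, assuming GNU binutils' `readelf` (which shows "IFUNC" as the symbol type for GNU indirect functions):

```python
#!/usr/bin/env python3
"""Crude sketch: list GNU ifunc symbols in an ELF file by parsing `readelf -sW`.

A real auditing tool would diff this list across releases and flag additions
for review; this only does the listing part.
"""
import subprocess
import sys


def ifunc_symbols(path: str) -> list[str]:
    out = subprocess.run(
        ["readelf", "-sW", path], capture_output=True, text=True, check=True
    ).stdout
    symbols = []
    for line in out.splitlines():
        fields = line.split()
        # Symbol table rows look like:  Num: Value Size Type Bind Vis Ndx Name
        if len(fields) >= 8 and fields[3] == "IFUNC":
            symbols.append(fields[7])
    return sorted(set(symbols))


if __name__ == "__main__":
    for sym in ifunc_symbols(sys.argv[1]):
        print(sym)
```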
If, in a project takeover, attackers try to embargo any reporting of security holes and ask that nobody but them be notified, it’d be possible to have some trusted third-party notified.
If threat actors try to inject attacks right before a freeze – I understand that Ubuntu may have been a target – then apply greater scrutiny to changes landing in that window.
If distros linking code into sshd via distro patches is exposing sshd to security threats that the OpenSSH team isn't looking at, disallow that practice in distro policy for some set of security-sensitive projects.
Systemd may be too ready to link libraries into things that don’t require it.
Maybe it makes sense to have a small number of projects that are considered "security-critical" and then require that they only rely on other projects that are also security-critical. That's not a magic fix, but it might tamp down on the damage a supply-chain attack could cause. Still…my suspicion is that if an attacker could get code into something like xz, they could probably figure out ways to escalate to control of a system eventually, even starting with only user-level privileges. I mean, all it takes is for one user with admin privileges to run something under their account somewhere. Maybe Linux and some other software projects just fundamentally don't have enough isolation. That is, maybe the typical software package should be expected to run in a sandbox, kind of the way smartphone software or video game console software does. That doesn't solve everything, but it at least reduces the attack surface.
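By "sandbox" I don't even mean anything exotic – something like what bubblewrap (the tool under Flatpak) can already do would go a long way. A rough sketch of the idea, hypothetical invocation only (the right set of binds varies per application, and this is nowhere near a complete profile):

```python
#!/usr/bin/env python3
"""Rough sketch: run an untrusted program with a read-only system, a throwaway
home directory, and no network, using bubblewrap (bwrap).

Hypothetical wrapper, not how any distro actually packages things; assumes
bubblewrap is installed.
"""
import subprocess
import sys
import tempfile


def run_sandboxed(argv: list[str]) -> int:
    with tempfile.TemporaryDirectory() as fake_home:
        cmd = [
            "bwrap",
            "--ro-bind", "/usr", "/usr",        # read-only system
            "--symlink", "usr/bin", "/bin",
            "--symlink", "usr/lib", "/lib",
            "--symlink", "usr/lib64", "/lib64",
            "--proc", "/proc",
            "--dev", "/dev",
            "--tmpfs", "/tmp",
            "--bind", fake_home, "/home/user",  # throwaway home, discarded on exit
            "--setenv", "HOME", "/home/user",
            "--unshare-net",                    # no network at all
            "--unshare-pid",
            "--die-with-parent",
        ] + argv
        return subprocess.run(cmd).returncode


if __name__ == "__main__":
    sys.exit(run_sandboxed(sys.argv[1:]))
```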
But the social side of this is a pain. We don't want to break down the system of trust that lets open source work well more than is necessary…but clearly, maintainers are being targeted by people who have a lot of time to spend putting together tactics against them. I'm not sure that your typical open-source maintainer – health issues or no – can realistically be constantly on their guard against coordinated social engineering attacks.
The attacker came via a VPN (well, unless they messed up) and had no history. The (probable) sockpuppets also had no history. It might be a good idea to look for people entering open source projects who have no history and are only visible from behind a VPN…but my guess is that if we rely on reputation more, attackers will just seek to subvert that as well. In this case, they probably spent years making non-malicious commits largely to build reputation. If you're willing to put three years into building reputation on a given project, I imagine that you can do something similar to have an account lying in wait for the next open source project to attack. And realistically, my guess is that if we trust non-VPN machines, a state-backed attacker could get hold of one…it's maybe more convenient for them to bounce through a VPN, but it's not something that they absolutely have to do.
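The mechanical part of surfacing "this account is new and has almost no visible history" is trivial; whether that signal stays useful once attackers know it's being looked at is the real question. Something like this toy sketch against the public GitHub API is all it would take (the fields are real; the thresholds are made up):

```python
#!/usr/bin/env python3
"""Toy sketch: summarize how much public history a GitHub account has.

Purely illustrative; forges could surface this kind of signal natively.
Uses the public REST API unauthenticated, so it is heavily rate-limited.
Requires the third-party `requests` package.
"""
from datetime import datetime, timezone
import sys

import requests


def account_summary(username: str) -> dict:
    r = requests.get(f"https://api.github.com/users/{username}", timeout=30)
    r.raise_for_status()
    user = r.json()
    created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
    return {
        "account_age_days": (datetime.now(timezone.utc) - created).days,
        "public_repos": user["public_repos"],
        "followers": user["followers"],
    }


if __name__ == "__main__":
    info = account_summary(sys.argv[1])
    print(info)
    # Made-up thresholds; the point is the signal, not the numbers.
    if info["account_age_days"] < 90 and info["followers"] < 5:
        print("note: young account with little visible history")
```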
But without some way to help flag potential attackers, it just seems really problematic from a social standpoint. I mean, it's a lot harder to run an open-source project if one is constantly having to think "okay, has this person spent the past three years just building reputation so that they can go bad on me, along with a supporting host of bogus other accounts?" I'm not sure that level of vigilance is possible, even for really paranoid people.
That’s also how the most damaging attacks on proprietary software work. At some point all organizations need to trust their members and co-workers need to trust each other - I can’t think of a way to be more miserable at work than having to second guess everyone around you.
That’s also how the most damaging attacks on proprietary software work.
Yeah, supply chain attacks can happen. There was that infamous SolarWinds supply chain attack recently. But I think that there are some important mitigating factors there.
Proprietary software companies tend to have an employee's personal information – setting aside the fact that they often pull in open-source components like xz upstream in their supply chain, so it's not a clean split between a "proprietary software world" and an "open-source software world". They're not gonna hire and pay some random name they know only as a GitHub account behind a VPN, and certainly not make that person a maintainer of their software.
Many – not all – proprietary software companies require employees to work locally. It's likely that someone working for a US company is also subject to US law enforcement. In contrast, a state-backed group is probably targeting people elsewhere. Whoever the people in the Jia Tan group are, my guess is that there are good odds they aim to avoid being physically present in a country they're targeting. Even if we expose their identities, they probably aren't going to be directly impacted by law enforcement. Open source projects could hypothetically impose similar requirements, I suppose, but normally they're pretty border-agnostic.
That is, I think that this is going to be especially a challenge for the open-source world, as the attacks target some of the things the open-source community is notable for – border-agnosticism, a relatively low bar to join a project, and often not a lot of personal identity validation.
At some point all organizations need to trust their members and co-workers need to trust each other - I can’t think of a way to be more miserable at work than having to second guess everyone around you.
Yeah, that's kinda what I was thinking, but you put it more frankly.
It seems like there’s a lot of potential for this to be corrosive to the community.
Maybe Linux and some other software projects just fundamentally don’t have enough isolation. That is, maybe the typical software package should be expected to run in a sandbox
Some of it is that a lot of desktop software paradigms weren’t built to operate in that kind of environment, and you can’t just break backwards compatibility without enormous costs.
Wayland’s been banging on that, but there’s a lot to change.
Like, in a traditional desktop environment, the clipboard is designed so that software packages can query its contents whenever they like, rather than having the contents pushed to them only when the user pastes. That lets software snoop on the clipboard.
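Concretely, on X11 any client in the session can read the clipboard silently, with no prompt and no record. A tiny illustration (assumes an X session and the `xclip` utility; the same thing can be done directly over the X protocol):

```python
#!/usr/bin/env python3
"""Illustration: any process in an X11 session can read the clipboard silently.

Assumes the xclip utility is installed; no special permission, focus, or
user interaction is required.
"""
import subprocess

clipboard = subprocess.run(
    ["xclip", "-o", "-selection", "clipboard"],
    capture_output=True, text=True,
).stdout
print(f"current clipboard contents: {clipboard!r}")
```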
What's on the screen, and a lot of system state – which keys are down, where the mouse pointer is, and so forth – wasn't treated as information that needed to be kept private from an application.
Access to input hardware like controllers isn't linked to any concept of "focus" or "visibility" in a windowing system. That may or may not matter much for a traditional game controller (well, unless you're using some system where one inputs a password using a controller), but modern ones have things like microphones. Hell, access to microphones and cameras in general on laptops isn't normally restricted on a per-app basis for desktop software. From microphone access alone, you can extract keystrokes.
I don’t think that there’s a great way to run isolated game-level 3d graphics in a VM unless you’re gonna have separate hardware.
Something that I've wondered about is potential vulnerability via Steam. None of the software there is isolated in a "this might be malicious" sense – not from the rest of the system, not from other software sold via Steam. And Steam is used to distribute free software…I haven't looked into it, but I don't think the bar to get something into Steam is likely super high. And then consider that there are free-to-play games that have to make money however they can; some of that is going to be selling data, and part of how they do that may be simply bundling whatever libraries the highest bidder offers alongside their game. How secure are those supply chains? And on Steam, most of the software is closed source, which makes inspecting what's going on harder. And that's before we even get to mods and stuff like that, which come from all over the place.
I mean, let's say that some random library from an ad company used by a free-to-play game is sending up the identity of the user on the computer. It has some functionality that slurps in a payload from the network telling it to grab credentials off the system, and does so for ten critical users. Would anyone notice? I have a really hard time believing that there'd be any way to pick up on that. Even if you wanted to, you can't isolate many of these games from the network without breaking their functionality, and there's no mechanism in place today isolating them from the user's storage or other identity information.
I own IL-2 Sturmovik: 1946. It’s published and developed out of Russia, and the publisher, 1C, has apparently even been sanctioned as part of general sanctions against Russia. Russia is at war with Ukraine, and we in the US are supplying Ukraine. 1C runs a lot of software on user computers and can push updates to it. If the Russian authorities come knocking on 1C’s door and want a change made to some library, keeping in mind 1C’s position, are they going to say “no”? Keep in mind that what they say may determine whether the company survives an already-difficult environment, and that they may have no idea the full extent of what the state has going on. Now, okay, sure, probably – hopefully – there aren’t US military people or defense contractors running IL-2 Sturmovik directly on critical systems. But…let’s say that they run it at home. How carefully do they isolate their credentials and home information on that system? Does their home machine ever VPN in to work? Is there personal information – such as access to personal email accounts – that could be used for kompromat on such systems?
I've managed to get some Ren'Py software running in firejail. Ren'Py games have a lot of favorable characteristics for sandboxing: no 3d requirements, normally little access to input hardware required, one common codebase for most functionality, and you can generally run games with your local Ren'Py engine instead of the binaries they ship. In the process, I discovered that one of the games I had was talking to a chat channel. The source described this as reporting numbers of users, and the game is a noncommercial effort, but chat channels have been used for commanding botnets before, and even if this one isn't malicious, if it can do that without attracting attention, I'd very much expect that malicious software could too. That is about the extent of my attempts to really sandbox games, and even with that very limited and superficial effort, I already ran into something that I'd have some security concerns about. My guess is that there are a lot of holes out there, even if unintentional.
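For what it's worth, spotting that kind of quiet network chatter doesn't take anything fancy – something along these lines is roughly how I'd go looking (a rough sketch; assumes `lsof` is installed and you pass the game's PID):

```python
#!/usr/bin/env python3
"""Quick-and-dirty check: what network connections does a running process have open?

Rough sketch for poking at a game that might be phoning home; assumes lsof
is installed. Pass the process ID as the only argument.
"""
import subprocess
import sys

pid = sys.argv[1]
out = subprocess.run(
    # -a ANDs the filters: internet files (-i) belonging to this pid (-p),
    # with numeric hosts/ports (-n -P) so nothing hides behind DNS names.
    ["lsof", "-a", "-n", "-P", "-i", "-p", pid],
    capture_output=True, text=True,
).stdout
print(out or f"no open network connections for pid {pid}")
```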
As things stand, Valve and similar app store operators have no real incentive to isolate what they distribute, so if they do so, it's out of some kind of general sense of responsibility to users. Users generally don't have the technical expertise to understand the security implications of Valve's decisions, so they can't take that into account in purchasing decisions. We could impose something like strict liability on Valve and other app store vendors, or maybe OS vendors, in the event of compromise – that'd probably make them introduce isolation for the software they distribute. But there'd be some real costs to that. It'd make games more expensive. It might make it harder for smaller "app stores" like gog.com, itch.io, or Lutris to operate. I use Debian. Debian doesn't cost anything, and if you put the Debian project in a position where it may be legally liable, they're gonna have to charge for their OS to cover those costs. With charging probably comes DRM. With DRM probably come restrictions on what one can do with software, which smashes into problems with open-source software. It's definitely a problem.
XKCD 1200
Qubes OS
LinuxCon + CloudOpen Europe 2014 - Qubes OS - Joanna Rutkowska
It's been over 10 years already, and the desktop is only timidly adding containers, disposable VMs, per-program access permissions, and all that.
FreeBSD jails were way ahead of their time