Why is Signal almost universally defended whenever another security flaw is discovered? They’re not secure, they don’t address security issues, and their business model is unsustainable in the long term.
But, but, if you have malware “you have bigger problems”. But, but, an attacker would have to have “physical access” to exploit this. Wow, such bullshit. Do some of you people really understand what you’re posting?
But, but, “windows is compromised right out of the box”. Yes…and?
But, but, “Signal doesn’t claim to be secure”. Fuck off, yes they do.
But, but, “just use disk encryption”. Just…no…WTF?
Anybody using Signal for secure messaging is misguided. Any one of your recipients could be using the desktop app, and there’s no way to know unless they tell you. On top of that, all messages filter through Signal’s servers, adding a single point of failure to everything. Take away the servers, no more Signal.
If someone can read my Signal keys on my desktop, they can also:
Replace my Signal app with a maliciously modified version
Install a program that sends the contents of my desktop notifications (likely including Signal messages) somewhere
Install a keylogger
Run a program that captures screenshots when certain conditions are met
[a long list of other malware things]
Signal should change this because it would add a little friction to a certain type of attack, but a messaging app designed for ease of use and mainstream acceptance cannot provide a lot of protection against an attacker who has already gained the ability to run arbitrary code on your user account.
Not necessarily.
https://en.m.wikipedia.org/wiki/Swiss_cheese_model
If you read anything, at least read this link to self-correct.
This is a common area where non-security professionals out themselves as not actually being such: broken, fallacious reasoning about security risk management. It’s generally the same “dismissive security by way of ignorance” premise.
It’s fundamentally the same as “safety” (think OSHA and the CSB): the same thought processes, the same risk models, the same risk factors, etc.
And similarly the same negligence towards filling in the holes in your “swiss cheese model”:
“Oh, that can’t happen, because that would mean x, y, z would have to happen, and those are even worse.”
“Oh, that’s not possible, because A happening means C would have to happen first, so we don’t need to consider this a risk.”
…etc
The logic you’re using is the same logic that the industry has decades of evidence showing to be wrong.
Decades of evidence indicating that you are wrong, that you know far less than you think you do, and that you are most definitely not capable of exhaustively enumerating all influencing factors. No one is. It’s beyond arrogant for anyone to think that they could 🤦
Thus, most risks are considered valid risks (though this doesn’t necessarily mean they can all be mitigated). Each risk is a hole in your model, and each hole is itself at risk of lining up with other holes and developing into an actual safety or security incident.
In this case:
Signal was alerted to this over 6 years ago
the framework they use for the desktop app (Electron) already has built-in features for this problem
this is a common problem with common solutions that are industry-wide
someone has already made a pull request to enable the Electron safeStorage API, and Signal has ignored it
Thus this is just straight up negligence on their part.
There’s not really much in the way of good excuses here. We’re talking about a run-of-the-mill problem with baked-in solutions in most major frameworks, including the one Signal uses:
https://www.electronjs.org/docs/latest/api/safe-storage
I was just nodding along reading your post, thinking “yup, agreed”, until I saw there was a PR to fix it that Signal ignored. That seems odd; there must be some mitigating circumstances explaining why they haven’t merged it.
Otherwise that’s just inexcusable.
The PR had some issues: files were pushed that shouldn’t have been, refactors were added that should have been separate PRs, etc.
Though the main reason is that Signal doesn’t consider this issue a part of their threat model.
Those are outside Signal’s scope and depend entirely on your OS and your (or your sysadmin’s) security practices (e.g. I’m fairly sure that on Linux you need extra privileges for those things, on top of just read access to the user’s home directory).
The point is: why didn’t the Signal devs do it the proper way and obtain the credentials every time (interactively from the user, or automatically via the OS password manager), instead of just storing them in plain text?
They’re arguing a red herring. They don’t understand security risk modeling, and arguing about Signal’s scope just lets their broken premise dig deeper. It’s fundamentally flawed.
It’s a risk and should be mitigated using common tools already provided by every major operating system (e.g. the macOS Keychain).
“Highways shouldn’t have guard rails because if you hit one you’ve already gone off the road anyway.”
You’d need write access to the user’s home directory, but doing something with desktop notifications on modern Linux is as simple as:

dbus-monitor "interface='org.freedesktop.Notifications'" | grep --line-buffered "member=Notify\|string" | [insert command here]

Replacing the Signal app for that user also doesn’t require elevated privileges unless the home directory is mounted noexec.
Feel free to submit a pull request. We could use your help.
I don’t see the reasoning in your answer (I do see its passive-aggressiveness, but chose to ignore it).
I asked “why?”; does your reply mean “because lack of manpower”, “because lack of skill” or something else entirely?
In case you are new to the FOSS world: being “open source” doesn’t mean that something cannot be criticized, or that people without the skill (or time!) to submit PRs must shut the fu*k up.
It’s in the draft phase from what I can see.
https://github.com/signalapp/Signal-Desktop/pull/6849
(for Android) https://molly.im/ restores the encryption to this file and adds other useful things
deleted by creator
98% of desktop apps (at least on Windows and Linux) are already broken by design anyway. Any one app can spy on and keylog all other apps, all your home folder data, everything. And anyone can write a desktop app, so only using solutions that (currently) don’t have a desktop app version seems silly to me.
Linux has a sandboxing solution growing in popularity: Flatpak.
And Wayland. Xorg is a complete and utter mess.
I don’t think apps can read keystrokes for other apps on Wayland.
Unless you have root
If you have root you could just update the kernel to one that lets you do whatever you want on the system, so there’s no way to stop the attacker from viewing the passwords if the app is capable of displaying them.
Wayland doesn’t magically make other kinds of keyloggers stop working altogether though.
https://old.reddit.com/r/linux/comments/23mj49/wayland_is_not_immune_to_keyloggers/
https://github.com/Aishou/wayland-keylogger
https://github.com/schauveau/sway-keylogger
https://old.reddit.com/r/kde/comments/11h5tvl/wayland_security_keyloggers_are_back/
Now replace “signal” in your comment with “ssh” and think it over.
deleted by creator
Ah the old Lemmy SHHwitcharoo.
SSHwitcharoo
Thank you.
What’s the next best alternative?
Meeting in person.
I’ll organise a time and place to meet in person via … Carrier pigeon?
We’re citizens raging against phones, Lazlow.
With a helicopter over you, loud music next to you, and a dude mowing next to you.
And no smartphone in your pocket, of course.
That depends on your threat model. What are you worried about?
Matrix or XMPP; bonus points with a personal server.
Thanks to the interest of late, the Conversations and Gajim apps have come a long way in recent years, and Matrix has made good strides too with Element X.
I’d tried Matrix, but without a high level of technical experience it was pretty difficult to set up. I got as far as Docker, which needed Ansible, which wouldn’t compile. I also recall there were services I could pay for, but then I’d be relying on them to provide the security/servers.
Matrix doesn’t seem suited to the majority of people taking a first step away from big tech.
Snikket is meant to be super simple to self-host. Ejabberd has a web GUI that can make configuration easier.
I would only ever suggest Matrix if you’re running a private self-hosted instance that is NOT federated, which you can do even more easily with Signal anyway.
That’s fine, but why?
It is a privacy and GDPR nightmare, basically all federated services right now are.
https://github.com/libremonde-org/paper-research-privacy-matrix.org/blob/master/part1/README.md
https://web.archive.org/web/20240611200030/https://hackea.org/notas/matrix.html
https://anarc.at/blog/2022-06-17-matrix-notes/
https://web.archive.org/web/20210804205638/https://serpentsec.1337.cx/matrix
Interesting, thanks for the links I’ll take a look
Looked into the anarc.at blog. What’s said there about Matrix can be said about SMTP and probably XMPP. For GDPR compliance you need to know every server you have sent a message to. And compared to the IRC defaults (forward and remove), anything will look like a GDPR nightmare. GDPR was not designed for federated communications (like Matrix and ActivityPub), and it especially wasn’t designed for peer-to-peer communications.
Only with appservices. Doesn’t make sense otherwise.
(for Android) https://molly.im/
I can find the desktop client, am I missing something?
You’re right, there isn’t one, my apologies; I edited the comment.
You could use some kind of encrypted container on the desktop though, or maybe run it as a separate user with an encrypted home folder. The problem is you need to define a threat model first. Depending on what you’re afraid of, any particular “solution” could be either way overkill or never enough.
I hope you are joking
Basically for the same reason people often defend Apple: the user interface is shiny, and they claim to be privacy-oriented.
Signal is a centralized, US-hosted service; that alone should be enough to disqualify it, on top of our many other criticisms.
So not encrypting keys is bad, but actually encrypting them is bad too? Ok.
Another Apple fan? How is THIS supposed to be in scope for E2EE? Moreover, how is having a way to know whether a recipient is using the desktop app not the opposite of privacy?
Indeed. This is why I use Matrix. Also, fuck showing phone numbers to everyone (I heard they did something about it) and registration with phone numbers.
Any “secure” solution that relies on someone else for security is not secure.
Fuck the scope of E2EE. Signal makes a lot of claims on their website that are laughable. The desktop app is their main weakness. Attachments are stored unencrypted, keys in plaintext. If they were serious about security, they would deprecate the Windows app and block it from their servers.
WTF does Apple have to do with anything?
When someone has an FSB/NSA agent behind them reading their messages, no amount of encryption will help. The biggest cybersecurity vulnerability is located between the monitor and the chair. When you are texting someone else, that someone’s chair-monitor space is also vulnerable.
Well, maybe. I didn’t read their claims, nor do I use Signal.
Does OS-level encryption count as “plaintext” or not? If not, then the keys are encrypted, provided the user enables that feature in their OS. If it does, then nothing is fundamentally encrypted.
You just used the Apple fans’ argument. Yeah, I wonder what.
Your opinions are invalid.
What app stops a pre-installed keylogger? I’m all for hearing criticism of Signal, but it’s always about things they can’t control.
They can’t control if the encryption keys are stored in plaintext?
Ok, I didn’t mean that in particular.