Best Buy’s new anti-privacy practices are even more awful than before. They’ve lost any business I’d ever be sending them after this.
How to escape Honda’s privacy hell:
With sensors, microphones, and cameras, cars collect way more data than needed to operate the vehicle. They also share and sell that information to third parties, something many Americans don’t realize they’re opting into when they buy these cars. Companies are quick to flaunt their privacy policies, but those amount to pages upon pages of legalese that leave even professionals stumped about what exactly car companies collect and where that information might go.
So what can they collect?
“Pretty much everything,” said Misha Rykov, a research associate at the Mozilla Foundation, who worked on the car-privacy report. “Sex-life data, biometric data, demographic, race, sexual orientation, gender — everything.”
That’s despicable. Shame on you, Honda. Mozilla’s privacy report says their competitors are all pretty bad, too.
If you live in a state with a privacy law, you can and should write to your car’s manufacturer and demand that they show you all the information they collect about you, that they delete it all, that they not share it with anyone else, and that they limit how they use your data only to provide the services you’ve requested from them. These are your legal rights and manufacturers are legally obligated to respect them, even if it’s inconvenient and expensive for them. In fact, I think it’s our duty as citizens to make it cost companies more to process millions of our opt-out requests than they make selling our personal information.
My employer’s HR department asked me to validate a coworker’s identification documents and attest that they’re legitimate, for government tax form purposes.
I got an email from our payroll vendor, TriNet, with a link to attest to those documents’ authenticity. Clicking it took me to a page with scans of my friend’s driver’s license and Social Security card without requiring me to log in first. My coworker hadn’t entered their driver’s license number into the form, so I used the scanned image to enter it for them.
That’s pretty messed up. Good thing TriNet didn’t send that link to the wrong person, or they would have shared my colleague’s personally identifiable information with random strangers.
If your company uses TriNet, ask them for more information about this terrible, horrible, no good, very bad process, and how it got past design review. Their whole job is managing private payroll information. They’re not very good at it.
Update 2023-11-03: No, I don’t.
I tell people not to use Readdle’s Spark email app. Then I turn around and use the Things task manager, which lacks end-to-end encryption (E2EE). That concerns me. I have a PKM note called “Task managers”, and under “Things” my first bullet point is:
I realize I’m being hypocritical here, but perhaps only a little bit. There’s a difference in exposure between Things and, say, my PKM notes, archive of scanned documents, email, etc.:
I don’t put highly sensitive information in Things. No, I don’t want my actions in there to be public, but they’re generally no more detailed than “make an allergist appointment” or “ask boss about a raise”. I know some people use Things as a general note-taking app but I don’t. There are other apps more tailored to that and I use them instead.
I control what information goes into Things. If my doctor were to email me sensitive medical test results, the Spark team could hypothetically read them. Cultured Code can only view what I personally choose to put into Things. (That glosses over the “Mail to Things” feature, but I never give that address to anyone else and I don’t worry about it being misused.)
Things can’t impersonate me. Readdle could use my email credentials to contact my boss and pretend to be me. Now, I’m confident that they won’t. They’re a good, reputable company. But they could, and that’s enough to keep me away from Spark.
Finally, Cultured Code is a German company covered by the GDPR. They have strong governmental reasons not to do shady stuff with my data.
While I don’t like that Things lacks E2EE, and I wish it had it, the omission isn’t serious enough, given how I use the app, to keep me away from it. There are more secure alternatives like OmniFocus and Reminders, but the benefits I get from Things over those options make it worthwhile for me to hold my nose and use it.
Everyone has to make that decision based on their own usage. If you have actions like “send government documents to reporter” or “call patient Amy Jones to tell her about her cancer”, then you shouldn’t use Things or anything else without E2EE. I’d be peeved if my Things actions were leaked, but it wouldn’t ruin my life or get me fired.
But I know I should still look for something more secure.
Apple’s email apps and services don’t allow users to completely block senders. If someone is sending you messages you don’t want to receive, tough. You’re going to get them.
The iCloud.com website’s Mail app doesn’t have a sender block mechanism. Instead, it offers a way to create rules based on each message’s attributes, such as its sender. Rules support these actions:
None of those actions are the same as bouncing or silently discarding an email. At most, you can have the email sent to your Trash folder.
Mail.app on a Mac allows you to mark senders as “blocked”. You can configure Mail.app’s junk mail filters to either “Mark [their message] as blocked mail, but leave it in my Inbox” or “Move it to the Trash”. Again, you can’t bounce or discard it.
I tried to be clever and write an AppleScript program to delete messages from my Mac’s Trash folder. That was a dead end: AppleScript’s idea of deleting an email is moving it to Trash, even if it’s already there, and Mail’s scripting dictionary offers no way to empty the trash automatically either.
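For the curious, the dead-end attempt looked roughly like this. It’s a sketch, not a working solution: the sender address is a placeholder, and it illustrates exactly the limitation described above.

```applescript
tell application "Mail"
	-- Find the blocked sender's messages sitting in the unified Trash.
	-- ("blocked@example.com" is a placeholder address.)
	set doomed to (every message of trash mailbox whose sender contains "blocked@example.com")
	repeat with m in doomed
		-- Here's the catch: "delete" on a message that's already in
		-- Trash just moves it to Trash again, so it never actually
		-- goes away, and there's no "empty trash" command to finish
		-- the job.
		delete m
	end repeat
end tell
```

The script runs without errors, which makes the no-op behavior all the more frustrating to discover.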
Apple, this is disappointing. If I’m blocking someone, I don’t want to hear from them at all, ever. It’s not enough to send their messages to the Trash folder. I don’t want them to be in my email account at all.
The folks at Conscious Digital have a nifty website, yourdigitalrights.org, that makes it easy to file a CCPA or GDPR request asking a website to remove all of your data.
In particular, they make it easy to delete all your Twitter data.
Delete your account, regain your privacy, and cause someone an administrative hassle with a single click. It’s perfect!
This has been an odd week. Last Friday I got an email from someone asking about my hobby website’s CCPA compliance, ending with
I look forward to your reply without undue delay and at most within 45 days of this email, as required by Section 1798.130 of the California Civil Code.
The message sounded more legitimate than the usual spam I get: it asked about a real law in the jurisdiction where I live, and it referred to a real website that I operate. That last line looked, to my not-a-lawyer eyes, like something a professional litigant might send when gathering information before deciding whether to sue someone. Mass frivolous lawsuits are a thing, after all, and I dreaded the idea of having to defend my personal project in court.
This Friday, a friend told me that a researcher at Princeton sent the emails as part of a study on CCPA compliance they’re conducting with Radboud University. That changed my whole outlook: the letter came from a fake person with a fake email domain, lying about their intentions, and lying that the CCPA required me to reply to it. The stress it caused me wasn’t fake, though.
I submitted a link to my story to Hacker News, which a few people saw. Then someone else submitted another story and it took on a life of its own. It turned out that a lot of people got these emails. The researchers stated that they used the Tranco database of “popular” websites, and my tiny little site ranked only around number 350,000 on that list. I wasn’t alone. Princeton sent similar emails to other personal projects, and stories abounded of companies that had hired counsel and incurred legal expenses to reply to complete fabrications. People had been frightened and were becoming angry.
Based on advice from Hacker News readers, I contacted Princeton’s Research Integrity & Compliance department and Institutional Review Board, and Radboud’s Research Data Management and Ethics Committee with my concerns. Radboud responded quickly. Princeton hasn’t responded.
What especially bothers me is that I think this is an important subject to study. I’m a Californian and I support the CCPA protecting my privacy. I want to know if companies are complying with their legal obligations, and I think a large research university like Princeton is the right kind of entity to conduct an effective study. I also believe that the researchers had the right intentions and wanted to do a good job. My problem with it is that I think they made a grave error in misrepresenting their legitimate research questions as coming from a fictional person, and wrote it in a way that set off a lot of “oh no, I think I’m about to be sued” alarms.
I suspect the data collected from misled responses is corrupted beyond repair. For instance, many entities who replied are likely to have formulated a policy solely because they received the email. I think, then, that the appropriate next steps for Princeton and Radboud are to immediately send explanation and apology emails to all the recipients of the original emails, and to delete all responses they received from recipients of the misleading messages.
This was such an unnecessary mess. It’s a shame because this could have been crafted in a way that resulted in better data and without scaring the research subjects. Do better next time, Princeton.
Update 2021-12-02: The researchers updated their website to read, in part:
Our top priority has been issuing a one-time follow-up message that identifies our study and that recommends disregarding prior email. We are sending those messages.
We have also received consistent feedback encouraging us to promptly discard responses to study email. We agree, and we will delete all response data on December 31, 2021.
Apple released their new AirTag product six months ago, and as competent as it is for finding lost gear, Apple’s done everything possible to hamstring the little device to make it frustrating to use.
The product idea is simple: you buy one and attach it to something you don’t want to misplace, like your car keys. Then you can use your iPhone to locate that thing when you inevitably misplace it. For that one specific use case, and if you live alone, AirTag is magical. The “Find My” app tells you how far and in what direction the lost device is so that you can walk right up to it. I’ve owned and used various Tile devices before, and AirTags are easier to use and work better. From a hardware standpoint, I can’t imagine what I’d improve about them. However, Apple’s software decisions are constraining the lovely hardware to the point that I don’t want to use it anymore.
All of AirTag’s problems come down to a single issue: Apple is afraid that someone will use an AirTag to stalk another person, to the point that they’ve deliberately encumbered it to near uselessness.
Apple claims that AirTags are meant for lost items, not stolen ones, but that’s a smokescreen for the fact that they haven’t figured out how to reconcile privacy with having the things work as expected. Despite their claims, of course they’re for recovering stolen items! If it weren’t for the disastrous software features, they’d be perfect for tracking down a purse thief or the person who stole your kid’s bike. Apple is selling a soup spoon, then acting shocked and dismayed when someone wants to use it to eat stew. If Apple can’t see why someone would naturally want to use an AirTag to get stolen things back, then that’s a telling failure of their imagination.
Anti-tracking features are good. No one wants to enable stalkers and I don’t blame Apple for that. However, they’re so paralyzed by even the possibility that someone might use an AirTag in a bad way that they’ve made it useless for a bunch of good ways. If Apple’s going to lock it down this hard, they shouldn’t have bothered releasing AirTag to the public. It would have been far less frustrating if it had never left the design lab.
I wanted to love AirTags, but I regret my purchases. It could have been a wonderful little gadget had Apple defined it by its possibilities instead of its limitations. I won’t be buying more.
GoDaddy gives Texas abortion website notice: Find new host ASAP:
The highly controversial and regressive Texas abortion law went into effect on September 1. With the law comes the Texas Right to Life group’s website where anyone can submit allegations that a woman had an abortion past the state’s six-week cutoff mark. The state’s new abortion law also allows private citizens to target anyone accused of helping facilitate an abortion.
[…]
Amid the hacktivism is an outcry directed at GoDaddy, the company that hosts the website. Many have called on the company to cut off its services to Texas Right to Life, a call that has been heard. According to a statement GoDaddy provided to The New York Times, Texas Right to Life has been given 24 hours to find a different host for its website.
Even GoDaddy, of creepily sexy advertising fame, knows the Texas neighbor-stalking website is immoral.
I don’t ever want to hear another word about “government overreach” from the Texas GOP. Not a word.
As someone who deals with HIPAA’s privacy compliance as part of my job, I don’t ever want to hear the word HIPAA again from someone who isn’t adjacent to healthcare. Almost no one understands what it is, but a hundred million people are explaining their wrong ideas of it to each other in a giant game of telephone.
Here’s a short summary of HIPAA’s Privacy Rule, as described by the U.S. Department of Health & Human Services:
Who it applies to: healthcare providers such as doctors and hospitals, health plans, their business associates, and others who manage patients’ healthcare information.
What it does: limit the information a covered entity can share about their patients to fulfill specific medical and business requirements.
What it doesn’t do: apply to anyone else except those covered entities; prevent you from sharing your own information; prevent others from asking you about your health, including vaccination status.
Anyone who says that HIPAA doesn’t allow you to ask whether they’ve been vaccinated, or that it prevents them from answering, is factually wrong.