My FCC Net Neutrality Letter
This is my letter to the FCC on September 12, 2014 regarding the upcoming net neutrality decision-making process:
I am a Comcast customer, and I am paying them for a 100 million bit per second connection. Comcast has a monthly data cap of 300 billion bytes (or about 3 trillion bits) per month. At the speeds I’m paying full price for, I can use up my entire monthly data allotment in about 8 hours.
More simply, my monthly Comcast payment entitles me to use my Internet connection at full speed for one third of one day per month.
Esteemed colleagues, I find it disingenuous that Comcast and their peers claim that they need to charge more to carry the services I want to use, all while constricting my paid usage to one ninetieth of my connection’s capacity and raking in record profits. There is simply no fiscal credibility to their claims and I urge you to look upon them with due skepticism.
The FCC has received millions of letters supporting net neutrality rules against Internet slow lanes. Most of these have been form letters written by various citizen-friendly organizations and submitted by casual site visitors. Most of the individually written letters are various restatements of why net neutrality is important. All of those are good, but it’s also important to remind readers of these letters that anti-free-market groups like NCTA and its constituents have no legitimate counterarguments. They claim to need Internet slow and fast lanes to make money, but the industry makes huge amounts of money while delivering some of the worst Internet service in the developed world.
Comcast earned 3.3 billion dollars in net income in the second quarter of 2014, all while allowing customers to use only one ninetieth of the utility they’ve paid for. The only valid explanation for their strident opposition to net neutrality is sheer greed.
Cut Hoodies Some Slack
No good article about the Bay Area passes up a joke at the expense of hipsters in their hoodies, whether they're biking through The Mission or chairing board meetings in The Valley. It's an easy laugh and a knowing wink to your audience to assure them that you're on their side, that you know how silly grown adults look in their kiddie jackets. But consider:
- San Francisco is walkable, and people take advantage of it. My stroll from the bus terminal to my office is about a mile, and the sidewalks teem the whole way.
- Layering is crucial. The weather changes rapidly from warm to cold, gray, and windy, then back again. Clothes have to go from comfortably light to guarding against the elements quickly and easily.
- The city is humid. A light sweat from walking stays on you, and nylon clothes become waterlogged and sticky within blocks.
- The city is windy. Synthetic fleece jackets are great, until a breeze picks up and cuts through the coarse cloth. I’ve never been so cold as when I was near the shore in a thick fleece.
- Sun gives way to drizzle in minutes. Between the rain and the wind, it’s always smart to pack a hat.
Distilled, that means the ideal outerwear is of natural fiber to let sweat through while keeping wind out. It has a zipper and can go from breezy to windproof. It has a hat.
You know: a hoodie.
The humble jacket is a perfect fit for the local climate, where the weather is rarely great but is never bad. They’re warm in the winter and protective in the summer. You can buy one for a few dollars from street vendors, or spend more for a handmade work of urban art.
Making fun of a San Franciscan for wearing a hoodie is like teasing a Minnesotan for wearing a coat and scarf. Yes, we love our hooded jackets. Why shouldn’t we?
Scaling with Eventual Consistency
Originally published on the Crittercism Engineering Blog and reprinted with permission.
by Kirk Strauser on April 8, 2014
CAP theorem hates you and wants you to be unhappy
Some guy who isn't fun at parties came up with the CAP theorem, which basically says a distributed system can't be both consistent and available while the network is partitioned. In short, things will break and clients will lose access to a storage backend, or units in a storage cluster will lose the ability to talk to their peers. Maybe those servers have crashed. Even worse, maybe they're up and running but a network outage means they can't reach each other, and each part is still accepting writes from clients. Our enemy, CAP theorem, says we have to choose between:
- Keeping our data consistent, at the price of not being able to make progress when parts of the database cluster are unavailable.
- Keeping the database cluster available, at the price of some parts of it being out of sync with others and resolving any conflicts later.
Consistency brings pain
In any case, we have to decide what happens when we want to write to a record. Let's assume for demonstration's sake that a record is a bag of opaque data that the backing store doesn't really understand; imagine a JSON blob, or a bit of XML, or whatever other datatype your favorite database doesn't natively support.
Let’s also assume we have a consistent database. Either it’s a single server that’s running or not running, or it’s a cluster that only accepts requests if all nodes are online and synchronized. In short, we can always trust our database to Do The Right Thing.
Here’s how consistent workflows evolve from the most innocent of intentions.
First attempt: blind writes
We want to write a record, so we write it! Easy-peasy.
- Write out an entire record
- Profit
Second attempt: read-update-write
Ouch! Two requests want to update the same record. Both of them write out its entire contents, but only the last one wins.
- Request A writes out {"foo": "bar"}
- Request B writes out {"baz": "qux"}
- Request A cries salty tears
- Request B gloats and gets punched
That’s not good. The answer, then, is surely to read what’s there, update it, and write the results back out:
- Request A fetches the record with its initial value of {}
- Request A updates the record to {"foo": "bar"}
- Request A writes the record with its new value
- Request B fetches the record with A's value of {"foo": "bar"}
- Request B updates the record to {"foo": "bar", "baz": "qux"}
- Request B writes the record with the combined value
They shake hands and go home. And at 2AM, the Ops pager goes off because every write requires a read to get the pre-existing value. But let’s pretend IO is free and infinite. This algorithm is chock-full of race conditions. At our scale, here’s what’s going to happen many times per second:
- Request A fetches the record with its initial value of {}
- Request B fetches the record with its initial value of {}
- Request A updates the record to {"foo": "bar"}
- Request B updates the record to {"baz": "qux"}
- Request A writes the record with only its new value
- Request B writes the record with only its new value, overwriting A's
And now we’re right back where we started.
Third attempt: locks everywhere!
Looks like we’ll need to lock each record before updating it so that only one request can mutate it at a time. We care about uptime so we have a highly available distributed locking system (ZooKeeper, Redis, a highly motivated Amazon Mechanical Turk, etc.). Now our workflow looks like:
- Request A acquires a lock on the record
- Request B attempts to acquire the same lock, but fails
- Request A fetches the record with its initial value of {}
- Request A updates the record to {"foo": "bar"}
- Request A writes the record with only its new value
- Request A releases the lock
- Request B attempts to acquire the same lock, and succeeds this time
- Request B fetches the record with A's value of {"foo": "bar"}
- Request B updates the record to {"foo": "bar", "baz": "qux"}
- Request B writes the record with the combined value
- Request B releases the lock
That actually worked! Of course, it took two reads and two writes of the database and five calls to the lock manager, and Ops wants to set fire to your cubicle because their call duty phone won't stop buzzing.
But let’s assume that we have a free and infinite lock manager. What happens if Request A never completes the transaction and releases its lock, maybe because of network problems, or the node it was on died, or it couldn’t write to the database, or [insert your own pet scenario here]. Now we can’t make progress on B’s request until the lock expires, or until we break the lock and potentially overwrite A’s updates. For all our efforts, we’re still not in a much better place than we started.
Side note about the locking manager
Any distributed lock manager has to solve all of the same problems we’re listing here. Even if we use this pattern, we haven’t made the root problem go away: we’ve just shifted it to another piece of software. The CAP theorem means that a lock manager optimized for consistency has to sacrifice availability, so in the event of a network outage or failed locking manager node we still can’t get any work done.
But eventual consistency brings joy and unicorns!
Consistency is critical for many reasons. I’d much rather queries to my bank be slow or unavailable than incorrect! There are times and places when we want the properties that consistency buys, regardless of its price.
But there aren’t many of them at our scale.
What we want is eventual consistency: a promise that the database will make a best effort to converge so that reads of a record eventually return its current value. This pattern extends both to the database we use to store our records and to the way we generate and process those records.
Solution: journaled updates
Instead of treating our record like an atomic chunk of data, we’ll treat it like a list of atomic chunks of data representing updates that clients want to make.
- Request A appends its update of {"foo": "bar"} to whatever happens to already be in the record
- Request B appends its update of {"baz": "qux"} to the record
- Much later, if ever, Request C fetches all the values from the record and combines them into a final data structure. In pseudocode:
def materialize(query):
    result = dict()
    for key, value in query.records():
        result[key] = value
    return result
In our example, that would fetch the list of updates [{"foo": "bar"}, {"baz": "qux"}] and combine them into a single record like {"foo": "bar", "baz": "qux"}. This is a very fast operation for any sane amount of updates.
Our primary usage pattern is “write many, read rarely”. Most of the events recorded by our system will never be viewed individually, but might be used to calculate trends. This solution allows us to trade a small (but fast and easy) bit of post-processing for a huge bit of not having to worry about requests clobbering each other, locking semantics, or mixing reads with writes.
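As a concrete (if simplified) illustration, here's a runnable Python version of the journaled-update pattern, with a plain list standing in for the record's storage:

```python
# Runnable sketch of journaled updates: every write is an append-only
# entry, and readers materialize the record on demand.
journal = []  # stand-in for the record's list of update chunks

def append_update(update):
    journal.append(update)  # no read, no lock: just an append

def materialize(records):
    result = {}
    for update in records:
        result.update(update)  # later updates win on a per-key basis
    return result

append_update({"foo": "bar"})   # Request A
append_update({"baz": "qux"})   # Request B
print(materialize(journal))     # {'foo': 'bar', 'baz': 'qux'}
```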
Ordering is still hard
This solution isn’t magic, though. It doesn’t define how to reconcile conflicts between updates, and we still have to make those decisions.
Time and time again
The simplest method is to store a timestamp with each update and replay them in order. However, it's impossible to guarantee ordering between timestamps generated on more than one host. Two database servers might be off from each other by a few seconds, and NTP only seems like a solution until you've actually tried to count on it. The one situation where this is feasible is when requests to update a given record are all generated by the same client. In this case, we can use the client-generated timestamp to reasonably represent the correct ordering of updates.
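Here's a sketch of that single-client case in Python, with invented ts fields standing in for client-generated timestamps:

```python
# Replay updates in client-timestamp order before materializing.
# This is only safe when one client generates all updates for a record,
# so its clock provides a meaningful ordering.
updates = [
    {"ts": 3, "data": {"foo": "new"}},
    {"ts": 1, "data": {"foo": "old"}},
    {"ts": 2, "data": {"baz": "qux"}},
]

def materialize_ordered(updates):
    result = {}
    for entry in sorted(updates, key=lambda e: e["ts"]):
        result.update(entry["data"])
    return result

print(materialize_ordered(updates))  # {'foo': 'new', 'baz': 'qux'}
```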
Understand our data
Another approach is to make smart decisions about the data we're storing. Suppose a certain key, foo, may only ever increase. Given a set of updates like [{"foo": 23}, {"foo": 42}, {"foo": 17}], the correct resolution would be {"foo": 42}. This requires an understanding of the data, though, and isn't something we'd want to pursue for customer-generated inputs.
TL;DR
Math says you can’t have both consistency and availability. At our scale, availability wins the argument and eventual consistency carries the day.
Wet Shaving: A Year Later
I’m a sucker for the idea of ritual. When I learn about a traditional, labor-intensive practice like shining shoes, oiling boots, or a complicated car washing regimen, I’m always drawn to try it myself. I imagine having the same meditative experience as the person convincing me to try their routine: feeling a connection to my ancestors, appreciating the finer things, tasting the rewards of patience, and such. So when I read an article about wet shaving a year ago, I could hardly wait to get started.

In practice, though, I hate ritual. I’ll pay a few bucks to have someone else shine my shoes. San Francisco Bay Area climate isn’t very hard on boots, whether I’ve diligently oiled them or not. Automatic car washes are popular for a reason. Basically, I run out of patience for things that take too long just for the sake of taking too long.
One recent morning, I found myself wondering if I actually enjoyed wet shaving or if I’d be better off going back to a can of foam and an 8-bladed disposable razor. Millions of guys do it the new way, after all - should I rejoin them?
No. For me, wet shaving is clearly better for two specific reasons:
- It’s way cheaper. It’s like the laser printer business model of charging more up front but offering dirt cheap supplies. After the initial purchase, consumables cost less than $10 a year.
- I haven’t had a single ingrown hair since I started. Modern razors always leave me with a few bumps on my neck and cheekbones, but that problem has completely disappeared.
Yes, it takes longer than I’d like and still carries more trappings of ritual than I care to think about. Still, it’s a little luxury that’s measurably nicer and I don’t think I’ll give it up.
I use and happily recommend:
- Merkur 34C heavy duty razor
- Tweezerman shaving brush
- Art of Shaving classic stand
- Astra Super Platinum blades
- Art of Shaving pre shave oil
Update:
Garrick Dee wrote another nice introduction to the subject at the Grooming Essentials blog.
Great Expectations
I probably sound like I gripe all the time, but that’s really not what I’m like. I’m an optimist and happy by nature. It’s just that I have high expectations for how things could be and I’m disappointed when I see people fall short of their potential. I don’t complain about companies that are trying their best but fall short. I call out the ones that could be so much better but don’t seem to have the desire to see it through.
Making Devonthink Sync Between Computers
Update: 2021-05-27
This is still getting traffic for unknown reasons. Today, in 2021, the problem is long solved. DEVONthink 3 syncs perfectly with itself and with DEVONthink To Go. Again, this is purely historical and not a reflection of the state of things today.
Also, I have no idea why this post is suddenly so popular again. Help me out and let me know how you found this page? I’d sure appreciate it!
Update: 2016-09-17
In July 2016, DEVONtechnologies released DEVONthink 2.9 with an entirely new sync engine. It's like a brand new program, and synchronization has been flawless. Although I've only been using the new version for a couple of months now, it feels better and faster, and it behaves deterministically in a way the older versions never did.
At this point, I’m cautiously optimistic that all of the problems I wrote about below are fixed and obsolete. My fingers are crossed!
I’m keeping this post up for historical reasons but I don’t think that it’s relevant anymore.
Q: I have DEVONthink Pro Office and I want to sync my home and work computers so that I can access documents in both locations. How can I do that?
A: You can’t. Give up. It won’t work reliably.
Q: No, really. How do I do that?
Longer A: Seriously, give up. It doesn’t work and you’ll just get angry and frustrated. Trust me.
I use and love DEVONthink Pro Office as a document manager. Pretty much every piece of information I come across goes into it, whether scans of utility bills, PDFs of software manuals, Twitter messages I starred, or the complete collection of RFCs. If there's any chance I might ever want to find something again, DTPO stores it. Its most important feature is the uncanny ability to return exactly the search results I want when I need to find something. Second only to that is its AI-powered "see also" feature: "you seem to be reading up on an obscure technical subject. You might also be interested in the author's blog posts about it, some guy's master's thesis on the main algorithm, and the popular alternative version written by a teen living in a favela in São Paulo."
It’s that good. And I’m still desperate to find anything else to replace it.
The main problem is that DTPO refuses - just flat-out digs its heels in and resists - syncing reliably for more than a few days at a time. The pattern always goes like this:
- I start off optimistic, determined that this time will be different.
- At home, I add a sync connection to Dropbox, or to my own WebDAV server which has been syncing OmniFocus and other apps successfully for years.
- I sync one of my medium-sized (2GB or so) databases to that connection.
- I select the Synchronize menu option and wait several hours as my data gets pushed up to the server.
- At work, I set up the same connection and import the database. Then I select Synchronize and wait a few hours as all my data comes back from the cloud.
- I use it for a couple of weeks until I start getting random sync errors that cause it to stop halfway through without copying across all my new documents.
- After going through all the troubleshooting tips on their forum (of which there are many because this seems to happen to a lot of people), I give up and resign myself to the dreaded “Clean Location…” button which deletes all documents off the remote server.
- I walk away from it for a few weeks so that I don’t throw my laptop out the window.
So I exaggerated a little. It is possible to reliably sync two machines running DTPO:
- Pick one to be the primary machine.
- Pick the other to be the secondary.
- Do all your editing work on the primary. When you’re happy with it, use rsync or some other file copier to nuke what’s on the secondary and make it identical to the primary, losing any changes you might’ve made there.
- If you’re at work and want to add a document, just email to yourself at home and import it into DTPO there later when you’d rather be playing with your kids, washing the dog, or doing anything else in the entire world.
That’s how you reliably sync DTPO. Anything else is just a ticking time bomb.
More Shoe Fails
I had a wonderful experience buying new Rockport shoes from Brown Brothers Shoes in Alameda a couple of months ago, to the point that I wrote a gushing Yelp review and told all my friends to go there.
Oops.
My two-month-old Rockport shoes (which I wear only to work at my desk job) already need to be re-soled. The hard rubber heels have worn through so that now I’m walking on the soft foam cushion, and that can’t possibly last too long. I took them back to the same store and found that they’re a lot better at selling shoes than at helping customers.
First, the salesman said that it was probably because I wear arch supports in them. That would seem ridiculous even if they weren’t the insoles that I bought from their own store at their own suggestion. Next, he recommended a local shoe shop and sent me packing. I asked if they sold other, more durable walking shoes, like some I could wear from my bus stop to the office and still have them last more than two months. The salesman said that no, these are the best.
My shoes are in the shop now and I should have an estimate for fixing them by Monday. Hopefully it’ll be cheap enough that I can have them to wear for a few weeks while I shop for replacements. I don’t know what they will be, but they won’t be Rockports and I won’t be getting them at Brown Brothers.
To Sell A Car
In the process of moving to another state, we decided to sell my car to some friends. This turned out to be much harder than anticipated.
I admit that this is entirely my fault and I deserve to be made fun of for it, but we couldn’t find the title. It could be that the bank which financed the loan never sent it to us. It could be that it’s in our safe deposit box in our last city and that I’ll find it next month when I go back for the rest of our stuff. Or maybe I’m just a bad document caretaker and I lost it along the way. I don’t know. But the end result is that we don’t have the title and needed to have a duplicate issued before we can sell the car.
Late May
I called the county clerk’s office to ask how to apply for a duplicate title. The clerk was very helpful and friendly, and offered to look up the necessary information while I was on the phone. I gave her my car’s VIN and my personal information, and she came back with the unwelcome news that the bank still had a collateral lien on the car. I pointed out that I bought it used in 2000 and didn’t have a 12-year loan on a used Oldsmobile, and that I hadn’t been arrested for chronic non-payment of the loan. She laughingly agreed that I’d clearly paid it off, but needed a notarized lien release from the financing bank before she could issue a new title.
When I tried to find contact information for that bank, I discovered they had been acquired by another bank in 2004 and no longer existed.
OK. So.
Early June
I called the new bank, Regions, and explained the situation. They were more pleasant and easier to work with than I’d feared, but couldn’t find any information about my paid-off-9-years-ago loan from their subsidiary. They took all my information, though, and agreed to send a lien release if they couldn’t find proof that I still owed them money. That seemed perfectly fair and reasonable — from a bank! — and I sat back to wait for the letter to arrive.
It didn’t arrive.
Late June
I called Regions again. They were missing some information from the lien release application form (but weren’t sure exactly which information) and needed to re-file it. Given how nice they were and that I wasn’t even their customer any more, I didn’t protest or complain too much.
July
A couple of weeks later, the official, notarized lien release came in the mail. The VIN wasn't quite identical to the one I gave them, but I hoped the county clerk would call it "good enough" and accept the note.
Now we were ready to apply for the replacement title. The state’s form required that Jen and I both have our signatures notarized, so on a sunny Saturday, we drove to a nearby UPS Store and paid up. We stuffed the lien release letter, the application, and a check for $14 in an envelope and mailed it to the county clerk’s office.
August
Not a peep from the county clerk. I didn’t rush things because, well, government office… But after a few weeks of silence, I called to check on the application.
The county clerk never received it.
The notarized application? The check? The necessary, certified original copy of the lien release? Lost forever to the mail system.
I asked the clerk if I could just take the car out back and burn it, as that might be the easiest way to dispose of it. She asked me to please not do that.
I sheepishly called Regions again to explain the situation, apologize profusely, and to ask them to please send me yet another copy of the lien release. They cheerfully agreed to and collected all my information to fill out the request form.
I called US Bank to cancel my lost check, and they told me there was a $30 charge to stop payment on a $14 note. I told them not to bother and that I'd take my chances.
Now
And that’s where it stands. All I wanted to do is sell my car, and it’s involved the county clerk, three banks (one of them out of business), a UPS Store, and the post office. As of today, I’m no closer to the goal than I was two months ago.
As a side note: yeah, it was my fault for losing the original title (if I ever even had it). But I wouldn’t have been able to transfer the title to the new owners without the lien release anyway, so this was destined to be a pain in the butt in any case.
Applecareless
While I almost never buy extended warranties, conventional wisdom is that you should always buy AppleCare for an Apple laptop. You have up to a year after buying your laptop to purchase the extended coverage. At a high level, you’re basically buying an insurance policy for a piece of hardware with a specific serial number. Why does Apple make this so difficult?
I bought my MacBook Pro directly from Apple’s website. Here’s how AppleCare purchase should work:
- I log in to their store website.
- I view my order history and find my laptop.
- Apple has my MacBook Pro’s serial number on file with this order, and they also have a list of equipment covered by AppleCare. Since my laptop isn’t already covered, the site displays a “Buy AppleCare” button next to it.
- I click the “Buy AppleCare” button, choose to use my billing information that Apple already has on file, and click “Buy it now”.
- I get a confirmation email and move on to other things.
A lot of people bought their laptops through other sources, like local dealers, chain retail stores, and so on. Since Apple might not have any record of their purchase, here’s how that process should work:
- A customer visits Apple’s store website.
- Under “Mac Accessories”, they click “AppleCare”.
- They see a new form titled “What’s your Mac’s serial number?” and a link to how to find that information.
- When the user enters their serial number, the website looks up the part information and selects the appropriate AppleCare plan for their hardware.
- They add the plan to their cart and check out normally.
- The user gets a confirmation email and moves on to other things.
In reality, the process is far less polished and, well, un-Apple-like:
- I logged into their store website and looked for a process like the one I described above.
- When that failed to materialize, I browsed around until I found the AppleCare plans in the store.
- After some rooting around, I found the correct plan and added it to my cart.
- I was given the option of picking my plan up in an Apple Store or having it mailed to me. Wait, what? Pickup? Mail? For a warranty? Fine — mail it.
- After a couple of days, my AppleCare plan arrived in the mail. It came in a large cardboard box with a tiny cardboard box inside it. The tiny box contained some printed material and a registration number, but no Apple stickers or anything else I’d actually want.
- Per instructions, I went to a separate section of the Apple website and entered my laptop’s serial number (which they already have on file from when I bought it last year!) and the AppleCare registration number (which they already have on file from when I bought it a few days earlier!).
- I agreed to the Terms of Service, which were identical to the now-completely-unnecessary printed copy that came in the box.
- After submitting those numbers, Apple asked if I wanted my coverage certificate sent by email or by postal service. “Telegraph” and “carrier pigeon” were not available options, so I chose email.
- Apple informed me that I’d successfully completed my application, that my registration was now in progress, and that I would receive my certificate when they had finished verifying my registration.
- That was over 12 hours ago. I didn’t get any kind of confirmation email, but my browser history helped me find the status page so I could check in on it today. It’s still stuck at “Registration in progress”, presumably while Gertrude from Accounts finds my punchcard in the filing cabinet.
I’d probably shrug the ordeal off if I were dealing with Best Buy, Microsoft, or some other company not known for their customer service. But Apple? This was the opposite of the kind of experience they usually provide and I’m disappointed that the process was so clumsy.
Omaha World Herald Makes School Remove Christmas Message
The Omaha World-Herald published a story about a Lincoln public high school that posted "Remember the Reason for the Season" on the electronic bulletin board in front of its building. The ACLU contacted the school's principal to request that the message be removed, and the school complied.
I can understand why some parents might not want that sign above the school. While I don’t personally have a problem with it, I’d feel uncomfortable if my kids’ school ran a similar sign that appeared to endorse Islam, Hinduism, or other religions. And as it turns out, the high school in question does have Jewish and Muslim students whose parents probably weren’t thrilled with the message.
Buried in the article, though, was an interesting nugget:
The ACLU was alerted to the sign by a World-Herald reporter who called to ask if anybody had complained about it. The marquee is along a well-traveled city street near the school.
Although many travelers had seen the sign in their daily commute, there were no complaints or any other evidence of offense until OWH’s own reporter triggered the investigation and created the story. I think the school did the right thing in not choosing one religion over another, but I think the newspaper was completely wrong to spark a controversy where none apparently existed.
On Generated Versus Random Passwords
I was reading a story about a hacked password database and saw this comment where the poster wanted to make a little program to generate non-random passwords for every site he visits:
I was thinking of something simpler such as “echo MyPassword69! slashdot.org|md5sum” and then “aaa53a64cbb02f01d79e6aa05f0027ba” using that as my password since many sites will take 32-character long passwords or they will truncate for you. More generalized than PasswordMaker and easier to access but no alpha-num+symbol translation and only (32) 0-9af characters but that should be random enough, or you can do sha1sum instead for a little longer hash string.
I posted a reply but I wanted to repeat it here for the sake of my friends who don’t read Slashdot. If you’ve ever cooked up your own scheme for coming up with passwords or if you’ve used the PasswordMaker system (or ones like it), you need to read this:
DO NOT DO THIS. I don’t mean this disrespectfully, but you don’t know what you’re doing. That’s OK! People not named Bruce generally suck at secure algorithms. Crypto is hard and has unexpected implications until you’re much more knowledgeable on the subject than you (or I) currently are. For example, suppose that hypothetical site helpfully truncates your password to 8 chars. By storing only 8 hex digits, you’ve reduced your password’s keyspace to just 32 bits. If you used an algorithm with base64 encoding instead, you’d get the same complexity in only 5.3 chars.
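The arithmetic behind that claim, sketched in Python (a hex character carries 4 bits, a base64 character carries 6):

```python
import math

# Keyspace of a password truncated to 8 characters, compared across
# encodings. Hex draws from 16 symbols, base64 from 64.
hex_bits_per_char = math.log2(16)   # 4.0 bits per character
b64_bits_per_char = math.log2(64)   # 6.0 bits per character

truncated_bits = 8 * hex_bits_per_char
print(truncated_bits)                       # 32.0 bits of keyspace
print(truncated_bits / b64_bits_per_char)   # ~5.33 base64 chars match it
```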
Despite what you claim, you’re really much better off using a secure storage app that creates truly random passwords for you and stores them in a securely encrypted file. In another post here I mention that I use 1Password, but really any reputable app will get you the same protections. Your algorithm is a “security by obscurity” system; if someone knows your algorithm, gaining your master password gives them full access to every account you have. Contrast with a password locker where you can change your master password before the attacker gets access to the secret store (which they may never be able to do if you’ve kept it secure!), and in the worst case scenario provides you with a list of accounts you need to change.
I haven’t used PasswordMaker but I’d apply the same criticisms to them. If an attacker knows that you use PasswordMaker, they can narrow down the search space based on the very few things you can vary:
- URL (the attacker will have this)
- character set (dropdown gives you 6 choices)
- which of nine hash algorithms was used (actually 13 — the FAQ is outdated)
- modifier (algorithmically, part of your password)
- username (attacker will have this or can likely guess it easily)
- password length (let’s say, likely to be between 8 and 20 chars, so 13 options)
- password prefix (stupid idea that reduces your password’s complexity)
- password suffix (stupid idea that reduces your password’s complexity)
- which of nine l33t-speak levels was used
- when l33t-speak was applied (total of 28 options: 9 levels each at three different “Use l33t” times, plus “not at all”)
Remember my comment about the modifier being part of your password? Basically, you’re just concatenating those strings together to create a longer password. There’s no real difference, and that’s assuming you actually use the modifier.
So, back to our attack scenario where a hacker has your master password, username, and a URL they want to visit: disregarding the prefix and suffix options, they have 6 * 13 * 13 * 28 = 28,392 possible output passwords to test. That should keep them busy for at least a minute or two. And once they’ve guessed your combination, they can probably use the same settings on every other website you visit. Oh, and when you’ve found out that your password is compromised? Hope you remember every website you’ve ever used PasswordMaker on!
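That figure is just the product of the option counts listed above; a sketch of the attacker's arithmetic:

```python
# Multiply out the attacker's unknowns (prefix and suffix disregarded):
charsets = 6    # character-set dropdown choices
hashes = 13     # hash algorithms (counting the ones missing from the FAQ)
lengths = 13    # plausible password lengths, 8 through 20 inclusive
leet = 28       # 9 l33t levels at three "Use l33t" times, plus "not at all"

candidates = charsets * hashes * lengths * leet
print(candidates)   # 28392 possible output passwords to test
```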
Finally, if you’ve ever used the online version of PasswordMaker, even once, then you have to assume that your password is compromised. If their site has ever been compromised — and it’s hosted on a content delivery network with a lot of other websites — the attacker could easily have placed a script on the page to submit everything you type into the password generation form to a server in a distant country. Security demands that you have to assume this has happened.
Seriously, please don’t do this stuff. I’d much rather see you using pwgen to create truly random passwords and then using something like GnuPG to store them all in a strongly-encrypted file.
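The same idea fits in a few lines of Python, shown here as a sketch with the standard-library `secrets` module standing in for pwgen (the function name and length are my own choices): every character is drawn from a cryptographically secure RNG, so the result has no algorithmic relationship to you or to any site.

```python
import secrets
import string

# Each character comes from a CSPRNG, so the password can't be derived
# from a URL, a username, or a master password.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())   # different every run; store it encrypted
```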
The summary version is this: use a password manager like 1Password to use a different hard-to-guess password on every website you visit. Don’t use some invented system to come up with passwords on your own because there’s a very poor chance that we mere mortals will get it right.
Stop The E Parasite Act
This is the letter I just sent to my representative, urging him to vote against Hollywood’s E-PARASITE Act:
Congressman Fortenberry, please vote against the appropriately-named “E-PARASITE Act” being proposed by Rep. Smith, TX. It’s the counterpart of Senate Bill S.968, the “PROTECT IP Act”.
This flawed legislation seeks to criminalize civil offenses and reverse our Constitutional presumption of innocence for the benefit of a tiny — but very vocal — coalition of Hollywood special interest groups. The Internet has brought untold billions of dollars to our economy and democracy to distant shores. Let’s not discard these advances for the benefit of a few CEOs who haven’t figured out how to do business in the new economy. Given technology legislation that’s supported by the AFL-CIO and opposed by Google, I’ll side with Google every time.
Please stop these parasites from destroying the Internet we built just so they can make a few more dollars before their obsolete business plans finish them off.
Thank you for your time,
Kirk Strauser
Norfolk, NE
Please let your own representatives know that we don’t want this terrible legislation.
Making DOS USB Images On A Mac
I needed to run a BIOS flash utility that was only available for DOS. To complicate matters, the server I needed to run it on doesn’t have a floppy or CD-ROM drive. I figured I’d hop on the Internet and download a bootable USB flash drive image. Right? Wrong.
I found a lot of instructions for how to make such an image if you already have a running Windows or Linux desktop, but they weren’t very helpful for me and my Mac. After some trial and error, I managed to create my own homemade bootable USB flash drive image. It’s available at http://www.mediafire.com/?aoa8u1k1fedf4yq if you just want a premade ready-to-download file.
If you want a custom version, or you don’t trust the one I’ve made — and who’d blame you? I’m some random stranger on the Internet! — here’s how you can make your own bootable image under OS X:
Relax!
There are a lot of steps, but they’re easy! I wanted to err on the side of being more detailed than necessary, rather than skipping “obvious” steps that might not be quite so easy for people who haven’t done this before.
Download VirtualBox and install it
- Download VirtualBox. I used version 4.1.4. The version available to you today might look different but should work mostly the same way.
- Open the “VirtualBox-[some-long-number]-OSX.dmg” disk image.
- Double-click the “VirtualBox.mpkg” icon to run the installer.
- Click “Continue”.
- Click “Continue”.
- Click “Install”.
- Enter your password and click “Install Software”.
- When it’s finished copying files, etc., click “Close”.
Download FreeDOS and create a virtual machine for it
- Download the FreeDOS “Base CD” called “fdbasecd.iso”. Note: the first mirror I tried to download from didn’t work. If that happens, look around on the other mirrors until you find one that does.
- Open your “Applications” folder and run the “VirtualBox” program.
- Click the “New” button to create a new virtual machine. This launches the “New Virtual Machine Wizard”. Click “Continue” to get past the introduction.
- Name your new VM something reasonable. I used “FreeDOS”, and whatever name you enter here will appear throughout all the following steps so you probably should, too.
- Set your “Operating System” to “Other”, and “Version” to “DOS”. (If you typed “FreeDOS” in the last step, this will already be done for you.) Continue.
- Leave the “Base Memory Size” slider at 32MB and continue.
- Make sure “Start-up Disk” is selected, choose “Create new hard disk”, and continue.
- Select “File type” of “VDI (VirtualBox Disk Image)” and continue.
- Select “Dynamically allocated” and continue.
- Keep the default “Location” of “FreeDOS”.
- Decision time: how big do you want to make your image? The full install of FreeDOS will take about 7MB, and you’ll want to leave a little room for your own files. On the other hand, the larger you make this image, the longer it’ll take to copy onto your USB flash drive. You certainly don’t want to make it so large that it won’t actually fit on your USB flash drive; an 8GB nearly-entirely-empty image will be worthless if you only have a 2GB drive. I splurged a little and made my image 32MB by clicking in the “Size” textbox and typing “32MB” (I hate size sliders). Click “Continue”.
- Click “Create”.
- Make sure your new “FreeDOS” virtual machine is highlighted on the left side of the VirtualBox window.
- On the right-hand side, look for the section labeled “Storage” and click on the word “Storage” in that title bar.
- Click the word “Empty” next to the CD-ROM icon.
- Under “Attributes”, click the CD-ROM icon to open a file chooser, select “Choose a virtual CD/DVD disk file…”, and select the FreeDOS Base CD image you downloaded at the beginning. It’ll probably be in your “Downloads” folder. When you’ve selected it, click “Open”.
- Back on the “FreeDOS — Storage” window, click “OK”.
Install FreeDOS
- Back on the main VirtualBox window, near the top, click “Start” to launch the virtual machine you just made.
- A note about VirtualBox: when you click the VM window or start typing, VirtualBox will “capture” your mouse cursor and keyboard so that all key presses will go straight to the VM and not your OS X desktop. To get them back, press the left [command] key on your keyboard.
- At the FreeDOS boot screen, press “1” and [return] to boot from the CD-ROM image.
- Hit [return] to “Install to harddisk”.
- Hit [return] to select English, or the up and down keyboard arrow keys to choose another language and then [return].
- Hit [return] to “Prepare the harddisk”.
- Hit [return] in the “XFDisk Options” window.
- Hit [return] to open the “Options” menu. “New Partition” will be selected. Hit [return] again. “Primary Partition” will be selected. Again, [return]. The maximum drive size should appear in the “Partition Size” box. If not, change that value to the largest number it will allow. Hit [return].
- Do you want to initialize the Partition Area? Yes. Hit [return].
- Do you want to initialize the whole Partition Area? Oh, sure. Press the left arrow key to select “YES”, then hit [return].
- Hit [return] to open the “Options” menu again. Use the arrow keys to scroll down to “Install Bootmanager” and hit [return].
- Press [F3] to leave XFDisk.
- Do you want to write the Partition Table? Yep. Press the left arrow to select “YES” and hit [return]. A “Writing Changes” window will open and a progress bar will scroll across to 100%.
- Hit [return] to reboot the virtual machine.
- This doesn’t actually seem to reboot the virtual machine. That’s OK. Press the left [command] key to give the mouse and keyboard back to OS X, then click the red “close window” button on the “FreeDOS [running]” window to shut it down. Choose “Power off the machine” and click “OK”.
- Back at the main VirtualBox window, click “Start” to re-launch the VM.
- Press “1” and [return] to “Continue to boot FreeDOS from CD-ROM”, just like you did before.
- Press [return] to select “Install to harddisk” again. This will take you to a different part of the installation process this time.
- Select your language and hit [return].
- Make sure “Yes” is selected, and hit [return] to let FreeDOS format your virtual disk image.
- Proceed with format? Type “YES” and hit [return]. The format process will probably finish too quickly for you to actually watch it.
- Now you should be at the “FreeDOS 1.0 Final Distribution” screen with “Continue with FreeDOS installation” already selected. Hit [return] to start the installer.
- Make sure “1) Start installation of FreeDOS 1.0 Final” is selected and hit [return].
- You’ll see the GNU General Public License, version 2 text. Follow that link and read it sometime; it’s pretty brilliant. Hit [return] to accept it.
- Ready to install the FreeDOS software? You bet. Hit [return].
- Hit [return] to accept the default installation location.
- “YES”, the above directories are correct. Hit [return].
- Hit [return] again to accept the selection of programs to install.
- Proceed with installation? Yes. Hit [return].
- Watch in amazement at how quickly the OS is copied over to your virtual disk image. Hit [return] to continue when it’s done.
- The VM will reboot. At the boot screen, press “h” and [return] to boot your new disk image. In a few seconds, you’ll see an old familiar “C:” prompt.
- Press the left [command] key to release your keyboard and mouse again, then click the red “close window” icon to shut down the VM. Make sure “Power off the machine” is selected and click “OK”.
Convert the VirtualBox disk image into a “raw” image
- Open a Terminal.app window by clicking the Finder icon in your dock, then “Applications”, then opening the “Utilities” folder, then double-clicking “Terminal”.
- Copy this command, paste it into the terminal window, then hit [return]:
/Applications/VirtualBox.app/Contents/Resources/VirtualBoxVM.app/Contents/MacOS/VBoxManage internalcommands converttoraw ~/"VirtualBox VMs/FreeDOS/FreeDOS.vdi" ~/Desktop/freedos.img
This will turn your VirtualBox disk image file into a “raw” image file on your desktop named “freedos.img”. It won’t alter your original disk image in any way, so if you accidentally delete or badly damage your “raw” image, you can re-run this command to get a fresh, new one.
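If you’d like to sanity-check the raw image before writing it to a drive, one quick test (a sketch; the helper name is mine) is the MBR boot signature: the first 512-byte sector of a bootable disk image ends with the bytes 0x55 0xAA.

```python
def has_mbr_signature(sector0: bytes) -> bool:
    """True if a disk image's first sector ends with the MBR boot signature."""
    return len(sector0) >= 512 and sector0[510:512] == b"\x55\xaa"

# To check the freedos.img produced by the command above:
#   with open("freedos.img", "rb") as f:
#       print(has_mbr_signature(f.read(512)))
```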
Prepare your USB flash drive
- Plug your USB flash drive into your Mac.
- If your Mac can’t read the drive, a new dialog window will open saying “The disk you inserted was not readable by this computer.” Follow these instructions:
- Click “Ignore”.
- Go back into your terminal window and run this command:
diskutil list
- You’ll see a list of disk devices (like “/dev/disk2”), their contents, and their sizes. Look for the one you think is your USB flash drive. Run this command to make sure, after replacing “/dev/disk2” with the actual name of the device you picked in the last step:
diskutil info /dev/disk2
- Make sure the “Device / Media Name:” and “Total Size:” fields look right. If not, look at the output of
diskutil list
again to pick another likely candidate and repeat the step until you’re sure you’ve picked the correct drive to completely eradicate, erase, destroy, and otherwise render completely 100% unrecoverable. OS X will attempt to prevent you from overwriting the contents of drives that are currently in use — like, say, your main system disk — but don’t chance it. Remember the name of this drive!
- If your Mac did read the drive, it will have automatically mounted it and you’ll see its desktop icon. Follow these instructions:
- Go back into your terminal window and run this command:
diskutil list
- Look for the drive name in the output of that command. It will have the same name as the desktop icon.
- Look for the name of the disk device (like “/dev/disk2”) for that drive and remember it (with the same warnings as in the section above that you got to skip).
- Unmount the drive by running this command:
diskutil unmount "/Volumes/[whatever the desktop icon is called]"
- This is not the same as dragging the drive into the trash, so don’t attempt to eject it that way.
Copy your drive image onto the USB flash drive
- Go back to your terminal window.
- Run these commands, but replace “/dev/fakediskname” with the device name you discovered in the previous section:
cd ~/Desktop; sudo dd if=freedos.img of=/dev/fakediskname bs=1m
- After the last command finishes, OS X will automatically mount your USB flash drive and you’ll see a new “FREEDOS” drive icon on your desktop.
Add your own apps to the image
- Drag your BIOS flasher utility, game, or other program onto the “FREEDOS” icon to copy it onto the USB flash drive.
- When finished, drag the “FREEDOS” drive icon onto the trashcan to unmount it.
Done.
- You’re finished. Use your USB flash drive to update your computer’s BIOS, play old DOS games, or do whatever else you had in mind.
- Keep the “freedos.img” file around. If you ever need it again, start over from the “Prepare your USB flash drive” section which is entirely self-contained. That is, it doesn’t require any software that doesn’t come pre-installed on a Mac, so even if you’ve uninstalled VirtualBox you can still re-use your handy drive image.
Taken To The Cleaners By Abe's Detailing
I read a nice newspaper story a while ago about Abe’s Detailing in Norfolk, NE. When I wanted to have Jen’s minivan detailed as a present, I thought I’d give Abe’s a try and made an appointment for the $45.99 “express detail”. When we picked it up later, the van looked nice, but they wanted to charge us for the $159.99 “presidential detail” that they performed instead.
I told the employee that I’d ordered the cheaper package. He said I must have talked to his brother and that his brother wrote it down wrong, and still wanted me to pay the full price for the wrong job.
I will never darken the doorsteps of Abe’s Detailing in Norfolk again. If you choose to do so, I highly recommend you get a written estimate in advance.
Guest Post By Gabby It Snowed
It snowed! I went outside today and played! I bet if it had snowed more and my feet didn’t freeze, I would have stayed out there longer!
Guest Post By Gabby Crazy Squirrel
I was at grandma’s house last weekend, and there was a power outage. Turns out, it was a squirrel chewing on the power lines, and it got electrocuted. We found out when my Uncle Brian came in saying, “There’s a dead squirrel that was chewing on the power lines.” When I heard that I fell on the floor laughing! That squirrel is nuts! Or at least was nuts…
Guest Post By Gabby They’re Here
Grandma and Grandpa got here yesterday! My camera is working again too! I took a picture of a mirror, and when I saw the picture I saw me and my camera flashing! Anyways, I am very excited! They’re staying 2 weeks!
Open Letter To KCAU TV
As of mid-August, I can’t watch the local ABC affiliate TV channel over my satellite dish because they tried to jack up the rates they charge Dish Network for carrying their channel. Never mind that their advertisers pay them by the number of viewers, regardless of whether that’s by antenna, cable, or satellite. Dish Network could almost get away with asking KCAU to pay them for the task of handling all the transmission details. Anyway, here’s a letter I wrote to KCAU’s president:
As you mentioned on your website, I could watch your programming over-the-air for free. While your position regarding Dish Network makes sense on the surface, it falls apart quickly. They are redistributing your signal at no cost to you while you still collect money from advertisers. Frankly, they’re doing you a favor by handling your broadcasting. Imagine that you could still get the same advertising revenue without having to pay for transmitters and the associated electricity and personnel. Nice, huh?
Since you’re not directly paid by viewers regardless of whether they watch by rabbit ears or by satellite dish, you can hardly claim to be losing money with the latter. In the meantime, your viewership is lower by the number who can no longer receive your signal (and you’re crazy if you think I’d downgrade from a crystal-clear satellite signal and DVR to a snowy analog antenna). The other local network affiliates must be rubbing their hands together with glee as you throw away your audience.
Finally, consider that a five-minute Internet search returns downloadable versions of current programming. While I personally don’t (yet) consider that a viable option to local programming, as of today that would be the easiest course for a lot of your viewers who have been cut off.
Please allow Dish Network to resume broadcasting your signals at no charge to you so that I can go back to watching “Lost”. Thank you.
Sincerely, Kirk Strauser
I have no particular feelings for either company, but Dish Network’s position in this one case seems by far the most reasonable of the two.
Guest Post By Gabby I Went To The Lake
We took a trip to Grandma’s house on Thursday last week, and on Saturday we went to the lake. A couple hours before we left we went boating. It took a long time for me and my friend to get off the tube. And when I did get off the tube, I asked the boat driver if we could stop the boat and swim for a while. When we left the lake it was a long drive back to grandma’s house, and I almost fell asleep! I had half a hamburger and went to bed.