Commodore declared bankruptcy 25 years ago today

Commodore International declared bankruptcy on April 29, 1994, and pretty much sealed the fate of the Amiga. I couldn’t care less about Commodore, but I think we lost something special when Amiga died.

An Amiga 500

My parents bought an Amiga 1000 shortly after it launched (and then, begrudgingly, a 256KB RAM expansion a month later because otherwise you couldn’t do much with it). It was a magical machine with true preemptive multitasking at a time when DOS was normal, and years before Macs could decently run multiple programs at once. I exclusively used it and its successors into the late 90s, until it became obvious to me — probably years after it was obvious to everyone else — that I was past the end of the road and well off into the weeds. The most frustrating thing about owning one of those clearly superior machines was the bragging of PC and Mac owners when their clearly inferior systems added features I’d enjoyed for years. High-res color graphics! Speech synthesis! Sampled sound! A usable GUI! Shared libraries! An object-oriented plugin system! Cross-application scripting! And most importantly, that gorgeous multitasking! Yes, yes, that’s great; I’d had those for a decade before they became popular on other personal computers.

Other people have written better than I possibly could, and at great length, about the many ways that Commodore managed to screw up their golden child. I was only peripherally aware of all that at the time. But I know that they had something amazingly special that earned a fiercely loyal cult following, and I truly believe we lost something good when they died.

RIP, Amiga. You were loved.

A standard for describing a site's password rules

There’s not a universal standard for what a valid password on a website must look like. Some sites allow you to use any four letters. Others require at least twenty characters, including at least one numeric digit and one “special character” (aka punctuation). Even when using a password manager, the process of creating a good one looks a lot like:

  • Turn the password manager’s strength settings all the way up and generate a password.
  • The website replies “passwords can’t be more than 20 characters long”.
  • Adjust the length down to twenty. Generate a new one and send it to the website.
  • The website replies “passwords may only contain the special characters ‘$_!#’”.
  • Adjust the number of symbols down to zero. Generate. Try again.
  • The website replies “passwords must contain at least two special characters”.
  • Turn the number of symbols back up to two. Click “generate” until you get a password that contains punctuation from “$”, “_”, “!”, and “#”, but nothing else. Try again.
  • …and repeat until you’ve appeased the website’s rules.

I propose instead that websites should document their password rules in a standardized, machine-readable manner. For instance, suppose that each site hosted a file in a pre-defined location, like /.well-known/password-rules.yaml, in a format such as:

max_length: 64
min_length: 8
allowed_symbols: "$#@!"
min_symbols: 1
min_upper: 1
min_lower: 1
min_digits: 1
matches: "^[a-z]+.*$"
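
As a sketch of how a password manager might consume such a file, here's a generator driven by those rules, expressed as the dictionary a YAML parser would hand back. The file location and field names are this post's proposal, not an existing standard, and for brevity the sketch skips the `matches` regex check:

```python
import random
import string

# Rules as a password manager might parse them from the hypothetical
# /.well-known/password-rules.yaml file proposed above.
rules = {
    "max_length": 64,
    "min_length": 8,
    "allowed_symbols": "$#@!",
    "min_symbols": 1,
    "min_upper": 1,
    "min_lower": 1,
    "min_digits": 1,
}

def generate_password(rules):
    """Build a password at max_length satisfying each minimum-count rule."""
    rng = random.SystemRandom()  # draws from the OS's secure entropy source
    # Satisfy each minimum explicitly...
    required = (
        [rng.choice(string.ascii_uppercase) for _ in range(rules["min_upper"])]
        + [rng.choice(string.ascii_lowercase) for _ in range(rules["min_lower"])]
        + [rng.choice(string.digits) for _ in range(rules["min_digits"])]
        + [rng.choice(rules["allowed_symbols"]) for _ in range(rules["min_symbols"])]
    )
    # ...then pad out to the maximum length from the full allowed alphabet.
    pool = string.ascii_letters + string.digits + rules["allowed_symbols"]
    required += [rng.choice(pool) for _ in range(rules["max_length"] - len(required))]
    rng.shuffle(required)
    return "".join(required)

print(generate_password(rules))
```

One fetch, one pass, no guessing game with the website's error messages.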

Then tools like 1Password could look for that file and tune their settings to suit. The new process for creating a password would look like:

  • Tell 1Password to generate a password for the site you’re currently looking at.
  • It fetches the rules file, interprets it, creates a password that satisfies all the requirements, and pastes it in the password field on the site.

Further suppose that the standard defined the calling conventions of a REST endpoint for changing passwords, and the rules file included that URL like:

change_url: /ajax/change_my_password

Wouldn’t it be just lovely if 1Password could automatically update every such website on a monthly basis, or whenever a site announces a security breach?
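
A rotation client could be nearly this simple. Everything here beyond the endpoint path is invented for illustration; the imagined standard would define the actual JSON field names and calling convention:

```python
import json
import urllib.request

def build_change_request(site, old_password, new_password,
                         change_url="/ajax/change_my_password"):
    """Build the password-change POST for a site's advertised endpoint.

    The JSON field names here are hypothetical; a real standard would
    define the calling convention.
    """
    payload = json.dumps({
        "old_password": old_password,
        "new_password": new_password,
    }).encode()
    return urllib.request.Request(
        site + change_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A password manager rotating a credential would send this with
# urllib.request.urlopen(req) and check for a 2xx response.
req = build_change_request("https://example.com", "old-secret", "new-secret")
```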

Ringing the bird

I was on an early morning walk and came across a guy staring at the telephone wires. As I approached, I caught the distinct aroma of marijuana. I turned to see what he might be looking at, and he held a finger to his lips to quiet me. He whispered, “there’s a mockingbird up there. If you listen, he’ll ring like a bell.” Sure, buddy.

So we stood there in silence, and then the little bird opened his mouth and sang chimes to us. He rang like a bell.

The stranger and I looked at each other, then smiled and laughed as we went our separate ways. That was a nice way to start a day.

Heavy traffic is not a DDoS

Ajit Pai claimed that when the FCC asked citizens to comment on Net Neutrality, their website was attacked with a distributed denial of service, or DDoS. I’ve heard many of his defenders claim that an overwhelming number of people trying to use the website to comment was in fact a DDoS. This is a lie.

It was not any kind of DDoS. Words mean things, and “DDoS” specifically means a coordinated attack. What the FCC experienced is what we call “heavy traffic”. A car analogy:

  • “Heavy traffic” is rush hour on the freeway.
  • “DDoS” is a mass protest with people physically blocking lanes on the road.

Even though the end result might be everything moving slower than desired, if you’re stuck in traffic but you tell your boss that you’re late to work because a protest blocked the street, you’re exactly as much a liar as Ajit Pai was when he perjured himself to Congress.

Happy birthday to me!

I registered Honeypot.net on July 1, 1998, so today is its twentieth birthday. We’ve had fun, little domain. Here’s to twenty more!

"At a Crucial Juncture, Trump's Legal Defense Is Largely a One-Man Operation"

At a Crucial Juncture, Trump’s Legal Defense Is Largely a One-Man Operation — The New York Times

Highlights:

“Joseph diGenova, a longtime Washington lawyer who has pushed theories on Fox News that the F.B.I. made up evidence against Mr. Trump, left the team on Sunday. He had been hired last Monday, three days before the head of the president’s personal legal team, John Dowd, quit after determining that the president was not listening to his advice.”

Also:

“Mr. Dowd had concluded that there was no upside and that the president, who often does not tell the truth, could increase his legal exposure if his answers were not accurate.”

Jokes about “the best people” aside, it sounds like genuinely competent people want nothing to do with the fiasco in DC.

How many minutes of Internet are you paying for each month?

If you pay for a 100Mbps cable connection to the Internet and your plan sets a 300GB data cap, you can use your connection at full speed for about 8.3 hours per month (figuring roughly 10 bits on the wire per byte of data, to allow for protocol overhead) before hitting overage charges.

If your cell phone plan supports 50Mbps LTE speeds and has a 10GB data cap, you’re only allowed to use it at full speed for 33 minutes per month.
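
The arithmetic behind both figures is a one-liner. This uses the rough rule of thumb of 10 bits transmitted per byte of payload to account for protocol overhead:

```python
def full_speed_minutes(cap_gb, speed_mbps, bits_per_byte=10):
    """Minutes of full-speed use before hitting a data cap.

    Assumes roughly 10 bits on the wire per byte of payload,
    a rough allowance for protocol overhead.
    """
    cap_bits = cap_gb * 1e9 * bits_per_byte
    seconds = cap_bits / (speed_mbps * 1e6)
    return seconds / 60

print(full_speed_minutes(300, 100))  # cable: 500 minutes, about 8.3 hours
print(full_speed_minutes(10, 50))    # LTE: about 33 minutes
```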

I think it’s deceptive for an ISP to advertise an Internet connection’s speeds without disclosing how much you can actually use it without being disconnected or racking up extra fees. I’ve written to my senators asking them to introduce legislation to protect customers from this misleading and predatory practice:

I believe that all Internet service providers should be required to disclose, as part of their advertising, how many minutes you may use their service at full speed without hitting data caps.

For instance, a cable company advertising “100 megabits!” but imposing a 300GB data cap only allows their users to download information for about 8 hours per month. A cell phone company that advertises fast 50 megabit LTE speed but has a 10GB data limit only gives their customers about 33 minutes per month of full speed usage.

I believe that simultaneously advertising fast Internet connections while only allowing customers to use it for a short amount of time each month is highly deceptive and should be illegal. Please introduce truth in advertising legislation requiring ISPs to disclose what portion of time customers on a typical plan would be allowed to use an Internet service being advertised.

I don’t reasonably expect anything to come of this, but I’m going to try anyway.

Airlines Restrict 'Smart Luggage' Over Fire Hazards Posed By Batteries

Airlines Restrict ‘Smart Luggage’ Over Fire Hazards Posed By Batteries : The Two-Way : NPR:

“Beginning Jan. 15, customers who travel with a smart bag must be able to remove the battery in case the bag has to be checked at any point in the customer’s journey. If the battery cannot be removed, the bag will not be allowed,” American said in a statement on Friday. The same day, Delta and Alaska announced similar policies on their flights.

American’s policy dictates that if the bag is carry-on size, passengers can take the luggage onboard, so long as the battery can be removed if needed. If passengers need to check the bag, the battery must be removed and carried onboard. But if the bag has a nonremovable battery, it can’t be checked or carried on.

An FAA spokesman told The Washington Post that the airlines’ policies are “consistent with our guidance that lithium-ion batteries should not be carried in the cargo hold.”

Last month I wrote: “Listening to an ad for luggage with a built in USB charger, which may be the worst idea ever. Now your suitcase can grow obsolete. What if it breaks? Or a bigger battery comes along? And you always have the weight penalty even when you don’t need it.” I think we can all agree now that this is a terrible idea for many reasons.

App subscriptions must offer value

Software authors are increasingly switching to subscription models to make their work “sustainable”. Too often they’re forgetting to make a value proposition that helps their customers. Here’s a hint: if you have to write a Medium post explaining why I should support your new business model, you’re doing it wrong.

I understand why authors can’t afford to write an app and then offer free upgrades for the following decade. That’s a great way to cut off the income supply that keeps new development happening. Neither authors nor their customers want that! Creators want to be compensated for their time and users want up-to-date software with competitive features. Buying an application one time shouldn’t come with the expectation that I should get all the newest work for free, forever.

The alternative is not that purchasers are an endless font of cash and goodwill, though. A recent trend is for annual app subscriptions to cost roughly the same as buying a new copy of the app each year. In the real world, almost no one bought an upgrade every single year, and a price that assumes they will isn’t sustainable. If you want to move to a subscription model, your price has to make sense as a value proposition by itself. Customers don’t care about pretty words and guilt trips in long blog posts. They want a good deal from their own perspective.

From a customer’s point of view, the math is simple: your target annual fee is the previous price divided by the number of years I would have expected to keep a paid copy before upgrading. For instance, if your upgrades used to cost $40, and you released new paid major versions every two years, I can be convinced to subscribe at a rate of $20 per year. Anything beyond that is a price increase, and that increase must be justified exactly as if you were selling me a new copy instead of a monthly rental. That is, you can’t tack on “…and now with cloud sync!”, or “…for teams!”, or pack it with other features I won’t care about and expect that I’ll happily pay twice the old price.
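
That pricing math, reduced to a sketch:

```python
def target_annual_fee(upgrade_price, years_between_paid_upgrades):
    """The subscription price a past customer can justify to themselves:
    what they used to spend, averaged over the old upgrade cycle."""
    return upgrade_price / years_between_paid_upgrades

# The example above: $40 paid upgrades shipped every two years.
print(target_annual_fee(40, 2))  # 20.0 dollars per year
```

Anything above that number is the price increase you have to justify.
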

1Password did this right: although their new “1Password Families” service costs more than their old software licenses, it offers lots of features that genuinely make it more useful. Smile Software did this wrong: their new annual TextExpander subscription service costs about the same as their previous one-time software licenses, but all of the new features were geared to a workflow that could not have been less attractive to me if they’d tried. They were asking me to pay a lot more and get nothing of value to me in return.

In summary, you want to make money. I want you to run a profitable business so that you’ll continue to make the software I enjoy. But you have to remember that while your app is your labor of love, for me it’s just a tool I use for work or play and it’s not my life’s ambition. It’s the one among several competitors that had the best value proposition. If that ever changes, I’ll re-evaluate and move on to one of the others. I’m frustrated that this is 101-level business class stuff, and we shouldn’t need to keep learning this lesson anew.

Introducing metric quantity units for computing

In computing, metric-sounding prefixes almost universally refer to sizes expressed as powers of two:

  • kilo = 2^10 = 1024
  • mega = 2^20 = 1,048,576
  • giga = 2^30 = 1,073,741,824
  • …and so on.

In 1998, the IEC incorrectly voted to change that, and it’s time to fix this mistake.

1K = 1K

Using “k” to mean 2^10 dates back to at least 1959, with Gene Amdahl of IBM (“Architecture of the IBM System/360”), the authors of “Instrumentation Techniques in Nuclear Pulse Analysis”, and others standardizing it as a unit in 1964. Since that time, binary units have been used pervasively to describe quantities. Well, almost. Hard drive manufacturers started using the smaller, metric homonyms to describe their products with larger numbers than their competitors. That is, a company could market their 50MB hard drive as 52 (metric) MB so that it sounded larger than anyone else’s 50MB drive. This caught on like wildfire because marketing loved it, even though binary sizes were correctly used for everything else.

The International Electrotechnical Commission decided to weigh in, and in 1998 (the same year that gave us SOAP) decided that the electronics industry should change their standard units to use a new system. Henceforth metric-sounding prefixes would start referring to decimal sizes, like:

  • kilo = 10^3 = 1,000
  • mega = 10^6 = 1,000,000
  • giga = 10^9 = 1,000,000,000
  • etc.

This was bad enough, because those numbers don’t naturally correspond to anything computer-related except hard drive sizes. For instance, the IEC would have us incorrectly believe that a 32-bit address could refer to 4.29GB of RAM. No. Worse, though, were the fictional binary units they invented to replace the actual industry standard. From then on, we were to say that:

  • 1,024 bytes = 1 KiB = 1 kibibyte
  • 1,048,576 bytes = 1 MiB = 1 mebibyte
  • 1,073,741,824 bytes = 1 GiB = 1 gibibyte
  • and I lack the stomach to continue.
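
The 32-bit address space example above, worked out in a few lines, shows why the decimal reading fits so badly:

```python
# A 32-bit address reaches 2**32 bytes: exactly 4GB in the binary
# units the industry has always used, but an ungainly 4.29... "GB"
# under the IEC's decimal reading of the same prefix.
address_space = 2 ** 32

binary_gb = address_space / 2 ** 30   # exactly 4.0
decimal_gb = address_space / 10 ** 9  # 4.294967296

print(binary_gb, decimal_gb)
```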

Donald Knuth said:

The members of those committees deserve credit for raising an important issue, but when I heard their proposal it seemed dead on arrival — who would voluntarily want to use MiB for a maybe-byte?! […] I am extremely reluctant to adopt such funny-sounding terms; Jeffrey Harrow says “we’re going to have to learn to love (and pronounce)” the new coinages, but he seems to assume that standards are automatically adopted just because they are there.

Knuth, as always, was right. The awful-sounding standard was appropriately mocked and ignored. Western Digital settled a lawsuit in 2006 for marketing an 80 billion byte hard drive as 80 gigabytes, with the plaintiff citing the fact that even then — 8 years after the “standard” was passed — essentially no one used metric sizes to refer to quantities.

A few well-meaning but misled companies have started using the metric units. For instance, Apple’s macOS describes hard drive sizes in metric units (but inconsistently lists RAM quantities in correct binary sizes such as 16GB). Before this snowballs out of control, we need to reach a real industry-wide standard that engineers will actually use. I assert that:

  • Computing, like every other industry, has its own jargon. Our mouse is not a mammal, and our prefixes don’t need to mirror the metric system.
  • The current IEC standard looks terrible, sounds terrible, and is nearly universally avoided.
  • The great thing about standards is that we can make our own and start using it.

The binary kilobyte, megabyte, and gigabyte are our heritage and our vocabulary. In the realm of computing, we own those terms. Therefore, I propose a new standard for describing storage quantities in computing. Effective immediately, metric-sounding prefixes in computing officially refer to their binary sizes as they have since IBM and DEC claimed them in the 1960s. Furthermore, metric sizes will use the new “tri” infix notation — abbreviated “t” — like so:

  • 1,000 bytes = 1 KtB = 1 kitribyte
  • 1,000,000 bytes = 1MtB = 1 metribyte
  • 1,000,000,000 bytes = 1GtB = 1 gitribyte
  • …continuing with tetribyte, petribyte, extribyte, and beyond.
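
A formatter for the proposal might look like this. To be clear, the “tri” names are this post's invention, not an adopted standard; the binary branch reflects traditional usage:

```python
def format_size(n_bytes, style="binary"):
    """Format a byte count using traditional binary prefixes (KB = 2**10)
    or the proposed decimal "tri" prefixes (KtB = 10**3).

    The tri units are the proposal above, not an adopted standard.
    """
    if style == "binary":
        base, units = 1024, ["B", "KB", "MB", "GB", "TB", "PB"]
    else:
        base, units = 1000, ["B", "KtB", "MtB", "GtB", "TtB", "PtB"]
    size = float(n_bytes)
    for unit in units:
        if size < base:
            return f"{size:g}{unit}"
        size /= base
    return f"{size:g}{units[-1]}"

print(format_size(1048576))          # 1MB: the binary megabyte
print(format_size(1000000, "tri"))   # 1MtB: the decimal metribyte
```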

Let people who want to use different units be the ones to adopt them. And frankly, “metribyte” sounds a lot better than “mebibyte” ever will.