Today I used our Droid AI to analyze a vendor’s security questionnaire response. It was one of the best experiments I’ve tried so far. I wrote:
We’re considering a new vendor, Foo Corp. I’ve described what they do in “foocorp-description.md”. I sent them a security questionnaire (“Questionnaire.doc”) and asked them to fill it out. All the other files here are their response.
Given the sensitivity of their service, does their reply seem adequate? Did they thoroughly complete their response to our questionnaire (“foocorp-response.txt”) and does it completely answer all the questions we sent to them? Are there any glaring gaps? Do their other documents support their answers?
Droid promptly replied with a detailed assessment identifying both the strengths and the areas of concern. It included an executive summary and a detailed list of suggestions to discuss with the vendor.
I double-checked Droid’s findings for accuracy and deleted some that didn’t seem terribly important. Then I wrote my own recommendations in my own words. It’s my job to apply my own judgment to the available information to make decisions, and I’m not outsourcing that judgment to an LLM. The AI didn’t do my job for me. Still, it saved me about a day’s worth of clerical work and made an onerous chore a lot more interesting.
I don’t ever plan to do a vendor review completely by hand again if I can help it.