I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT-generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.
I feel like I am living in a nightmare.


So the LLM can run arbitrary code against your database? Or your clients can? Both sound scary as hell!
I can’t imagine the nightmare of trying to reproduce “incorrect data” when they just send you the prompt instead of the query.
That could be fixed by simply logging the prompt and code executed. Maybe also give each prompt/response a reference ID and demand that in tickets. The nightmare would be actually reading the code the AI generated.
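For what it’s worth, the logging piece is a few lines if you control the point where the generated SQL actually runs. A rough sketch (the function name, log format, and DB-API style connection are all made up here, not anyone’s actual setup):

```python
import logging
import uuid

logger = logging.getLogger("llm_query_audit")

def run_generated_query(db_conn, prompt: str, generated_sql: str):
    """Execute LLM-generated SQL, logging the prompt and SQL under a reference ID."""
    ref_id = uuid.uuid4().hex[:12]  # short reference ID to quote in tickets
    logger.info("ref=%s prompt=%r sql=%r", ref_id, prompt, generated_sql)
    with db_conn.cursor() as cur:
        cur.execute(generated_sql)
        rows = cur.fetchall()
    logger.info("ref=%s rows_returned=%d", ref_id, len(rows))
    return ref_id, rows
```

Surface the reference ID in whatever the client sees, and a ticket can be traced back to the exact prompt and query.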
You’re being silly. Clients can only prompt the AI, and the AI has restricted read-only permissions on the database. Slap on an execution timeout to cover the case where the AI writes an expensive query.
The real concern is the AI getting a query subtly wrong and giving the client bad info. That gets “covered” by some flimsy disclaimer.
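And to be fair, the read-only-plus-timeout part is cheap to set up. A minimal sketch, assuming Postgres via psycopg2 (the llm_readonly role and connection details are invented for illustration):

```python
import psycopg2

# Connect as a role that only has SELECT grants (hypothetical "llm_readonly" role),
# cap any single statement at 5 seconds, and default transactions to read-only.
conn = psycopg2.connect(
    dbname="app",
    user="llm_readonly",
    password="...",
    host="db.internal",
    options="-c statement_timeout=5000 -c default_transaction_read_only=on",
)
conn.set_session(readonly=True)  # belt and braces: refuse writes at the session level
```

None of that catches a query that runs fine but answers the wrong question, which is the part a disclaimer is supposed to paper over.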