Lastly, there's the training data. I work for #AWS (so these are strictly my personal opinions). We are opinionated about the platform. We think there are things you should do and things you shouldn't. If you have deep knowledge of anything (Microsoft, Google, NodeJS, SAP, whatever), you will have informed opinions.
The threat models I have seen that use general-purpose models like Claude Sonnet include advice I think is stupid, because I am opinionated about the platform. There's training data about AWS in the model that was authored by not-AWS, and there's training data that was authored by AWS. The former massively outweighs the latter in a general-purpose, trained-on-the-Internet model.
So internal users (who are expected to do things the AWS way) are getting threats that (a) don't match our way of working, and (b) they can't mitigate anyway. For example, I saw an AI-generated threat of brute-forcing a Cognito token. While the possibility of that happening is non-zero (much like buying a winning lottery ticket), that is not a threat a software developer can mitigate. There's nothing you can do in your application stack to prevent, detect, or respond to it. You're accepting that risk, like it or not, and I think we're wasting brain cells and disk sectors thinking about it and writing it down.
The other one I hate is when it tells you to encrypt your data at rest in S3. Try not to; S3 encrypts every new object at rest by default now. There's no action for you to take. The thing you control is which key does the encrypting and who can use that key.
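If you want to see what that control actually looks like, here's a minimal boto3 sketch. The bucket name, account ID, key ARN, and role name are all hypothetical placeholders; the two knobs that matter are the bucket's default-encryption rule (which key) and the KMS key policy (who can use it).

```python
import json
import boto3

# Hypothetical names -- substitute your own bucket and CMK ARN.
BUCKET = "my-example-bucket"
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

s3 = boto3.client("s3")
kms = boto3.client("kms")

# Knob 1 -- which key does the encrypting: point the bucket's default
# encryption at your customer-managed KMS key. (BucketKeyEnabled reduces
# KMS request costs on high-volume buckets.)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KEY_ARN,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)

# Knob 2 -- who can use that key: the key policy is where the real
# access decision lives. This hypothetical policy keeps the account
# root as key administrator and lets one role decrypt.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccountAdmin",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "ReaderRoleCanUseKey",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/DataReader"},
            "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}
kms.put_key_policy(KeyId=KEY_ARN, PolicyName="default", Policy=json.dumps(policy))
```

That second call is the one an AI-generated "encrypt your data at rest" finding never mentions, and it's the only one with real security consequences.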
So if you have an area of expertise, the majority of the training data in any consumer model is worse than your own knowledge, and the model is going to generate threats and risks that irritate you.
4/fin