There has been quite some conversation around this lately. My take: Large Language Models (LLMs) like #ChatGPT cannot be treated as an #OSINT tool for a simple reason...they do not get the job done. On multiple levels.
Can they be used for 𝘴𝘮𝘢𝘭𝘭𝘦𝘳 tasks when operated with an understanding of their capabilities & limitations? Yes.
Image analysis, for example, is currently a pretty interesting (but limited) capability.
Can they be relied on as an independent OSINT tool? Absolutely not.
LLMs asked to extract data or "intelligence" often deliver instead...
Plausible-sounding but incorrect answers (example in the attached images).
"Poisoned" data drawn from disinformation, misinformation, and influence campaigns. And, at times, heavily biased answers.
No sources. Intelligence professionals need to evaluate the source of every piece of data and corroborate it through multiple independent sources. One = none in intelligence analysis. LLMs provide none.
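To make the "one = none" rule concrete, here is a minimal sketch (illustrative only; the Claim structure and scoring labels are hypothetical, not from any real tradecraft tool) of how an analyst workflow might refuse to accept a claim that lacks independent corroboration:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A single assertion an analyst wants to evaluate."""
    text: str
    sources: list = field(default_factory=list)  # independent source identifiers

def corroboration_status(claim: Claim) -> str:
    """Apply the 'one = none' rule: a claim backed by fewer than
    two independent sources is treated as uncorroborated."""
    independent = set(claim.sources)  # duplicate citations don't count twice
    if len(independent) == 0:
        return "unsourced -> discard (this is what an uncited LLM answer gives you)"
    if len(independent) == 1:
        return "single-source -> one = none, seek corroboration"
    return f"corroborated by {len(independent)} independent sources"

# Example: an LLM answer arrives with no citations at all.
print(corroboration_status(Claim("Company X runs its VPN on host Y")))
```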
And a last one...
What ChatGPT-4 refuses to provide, stating that "it is not available to the public," can instead easily be found via *actual* OSINT techniques.
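One classic example of such a technique: enumerating an organization's hostnames from public certificate transparency logs. Below is a minimal sketch using the free crt.sh search service (the query format and JSON fields are as publicly documented; error handling and rate limiting are omitted, and you need network access to run it):

```python
import json
import urllib.request

def subdomains_from_ct_logs(domain: str) -> set:
    """Collect hostnames for a domain from public certificate
    transparency logs via the crt.sh search service."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"  # %25 is a URL-encoded wildcard '%'
    with urllib.request.urlopen(url, timeout=30) as resp:
        records = json.load(resp)
    names = set()
    for record in records:
        # name_value may hold several newline-separated hostnames
        for name in record.get("name_value", "").splitlines():
            names.add(name.strip().lower())
    return names

if __name__ == "__main__":
    for host in sorted(subdomains_from_ct_logs("example.com")):
        print(host)
```

No model, no hallucination: every hostname returned comes from a verifiable public record that an analyst can source and corroborate.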
If you are evaluating your security perimeter, do not rely on an AI-based model...instead, do the actual OSINT work or hire a professional who can thoroughly and reliably uncover risky data and vulnerabilities that are publicly available - but shouldn't be.