Exposing the Uninvolved: Navigating Legal and Ethical Challenges of AI-Powered OSINT in Fintech Investigations

Written by Nura H – Ethical AI Ambassador at AI for Change Foundation

 

Introduction

Imagine posting a picture on Facebook with your friends and thinking nothing of it; you would hardly expect your profile to become part of an insurance fraud investigation. Yet according to a case reported by ICW Group (2024), exactly that occurred. In early 2023, a worker filed a claim for a back and neck injury sustained while lifting a pipe at work. He continued treatment for more than a year, which, given the minor severity of the injury, raised suspicion. While he claimed to have stopped all physical activity, Facebook posts from his wife’s account in early 2024 showed him competing in bodybuilding competitions after the injury. The mismatch between his medical complaints and his online presence led to a fraud investigation by ICW Group and law enforcement. This example highlights the critical role of OSINT in fraud investigation and prevention. Without minimizing its benefits, however, two questions must be addressed:

     • If information is public, do companies using OSINT need a lawful basis for processing the data?
     • How do data protection laws and the EU AI Act safeguard individual privacy and rights?

Social media platforms play a crucial role in open-source intelligence (OSINT) investigations, which often extend beyond the primary subject to examine the online presence of incidental individuals such as family members, business partners, or friends. As AI-powered OSINT becomes common in fintech investigations, it raises questions about privacy, personal data, and ethics. This article examines how fintech companies use AI-driven OSINT, the ethical issues it raises for individual privacy and personal data, and how EU law addresses these evolving technologies.

 

From expansion to exposure

OSINT refers to intelligence gathered from publicly available sources, including social media, forums, websites, the dark web, public records, and databases. It allows investigators to map relationships digitally, identify behavioral patterns, and reveal links to potentially fraudulent activity through tags, comments, photos, or group affiliations.
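To make the relationship-mapping idea concrete, the sketch below shows, in Python, how tagged connections harvested from public posts might be represented as a graph using the networkx library. The data and field names are entirely hypothetical; the point is how quickly people who are not the subject of an investigation enter the dataset.

    import networkx as nx

    # Hypothetical observations harvested from public tags, comments,
    # and group memberships: (person, person, context) triples.
    observations = [
        ("claimant", "spouse", "tagged_photo"),
        ("claimant", "coworker_a", "comment_thread"),
        ("spouse", "bodybuilding_group", "group_membership"),
    ]

    graph = nx.Graph()
    for source, target, context in observations:
        graph.add_edge(source, target, context=context)

    # Everyone one hop from the claimant is an "incidental individual":
    # they enter the dataset without ever being an investigation target.
    incidental = set(graph.neighbors("claimant"))
    print(incidental)  # e.g. {'spouse', 'coworker_a'}

Commercial OSINT platforms build far richer graphs than this, but the privacy dynamic is the same: each edge can pull an uninvolved person into the investigation.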
While these sources make OSINT a powerful investigative tool, the sensitivity and ethical concerns raised by its use in fintech investigations cannot be dismissed. One risk is the inclusion of incidental individuals such as friends, business associates, or social media connections. Opinions differ on whether publicly available information may be used without individual consent. Koops (2013) argues that privacy concerns should not be discredited simply because information is public: when we put something on the internet, we do not necessarily expect the whole world to see it.

 

OSINT in Fintech investigations

Traditional investigative processes rely on manual reviews that, while necessary, are time-consuming and resource-intensive. The financial environment moves fast: fraudulent activity happens in seconds, and relying solely on manual processes is no longer viable. OSINT enables companies to detect, assess, and respond to threats in real time, before they cause harm (ShadowDragon, 2025). Companies such as Maltego and ShadowDragon offer solutions for KYC, AML, internal fraud, risk assessment, cyber-enabled financial crime, and insurance claim investigations.

 

Impact of AI on OSINT

Rapid advancements in artificial intelligence have affected many industries, and OSINT is no exception: AI now automates much of the collection and analysis of data. It can analyze large datasets quickly to identify patterns, anomalies, and connections, saving substantial time compared with manual investigation. Visualization techniques such as heatmaps and geospatial mapping have also advanced with the integration of AI into OSINT, enabling analysts to navigate and understand complex data relationships more efficiently.
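As an illustration of the pattern-finding step, the sketch below uses scikit-learn's IsolationForest to flag statistical outliers in a synthetic feature matrix. The features and parameters are invented for the example and do not reflect any vendor's actual pipeline; note that the output is a shortlist for human review, not a final decision.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic data: one row per monitored account, three invented
    # features (e.g. posting frequency, new-connection rate, claim count).
    rng = np.random.default_rng(seed=0)
    features = rng.normal(size=(500, 3))
    features[:5] += 4.0  # make the first five accounts clear outliers

    model = IsolationForest(contamination=0.01, random_state=0)
    labels = model.fit_predict(features)  # -1 marks anomalies, 1 inliers

    flagged = np.where(labels == -1)[0]
    print(f"{len(flagged)} accounts flagged for human review: {flagged}")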

 

Ethical and privacy concerns in AI-Powered OSINT

As AI amplifies the power of OSINT, ethical concerns grow correspondingly, especially when investigations capture data about individuals unrelated to the primary subject. AI models reflect the data they are trained on; if that data is incomplete, flawed, or biased, the resulting intelligence will be misleading or inaccurate. AI-powered web scrapers automatically collect public data from social media, news sites, blogs, forums, web archives, and public databases. Because these tools operate at scale and without supervision, large-scale data scraping can inadvertently infringe on privacy principles, creating a conflict between technological capability and ethical responsibility.
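The sketch below illustrates the kind of technical restraint an automated collector can exercise: checking the site's robots.txt and rate-limiting requests. It is an illustration, not a compliance recipe; honoring robots.txt does not by itself create a lawful basis under the GDPR, as the next section discusses.

    import time
    from urllib.parse import urlsplit
    from urllib.robotparser import RobotFileParser

    import requests  # third-party HTTP client

    def polite_fetch(url: str, user_agent: str = "example-osint-bot") -> str | None:
        """Fetch a public page only if robots.txt permits, with a crawl delay."""
        parts = urlsplit(url)
        robots = RobotFileParser()
        robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        robots.read()
        if not robots.can_fetch(user_agent, url):
            return None  # the site asks not to be crawled here
        time.sleep(2)  # rate limiting: technical capability is not a license
        response = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
        response.raise_for_status()
        return response.text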

 

Innovation and invasion

Solove and Hartzog (2024, p. 4) argue that scraping personal data undermines nearly all key privacy principles embodied in laws, frameworks, and codes, including transparency, data minimization, and individual rights. They explain that although some legal frameworks exclude publicly available data, others, such as the GDPR, still protect it, a protection that scrapers often ignore on the assumption that public means freely usable.

 

EU AI Act and GDPR perspective

Both the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act) impose specific obligations on the collection, processing, and automated analysis of personal data, whether gathered through proprietary systems or OSINT techniques. This becomes especially relevant when AI is used for processes such as credit scoring, which the AI Act classifies as a high-risk application.

 

GDPR Perspective

The GDPR regulates the processing of personal data, including data from public sources. It sets limits on how fintech investigators can use OSINT, especially for profiling, risk assessment, or automated decision-making.

Definitions
Understanding how GDPR applies to OSINT begins with the foundational definitions set out in Article 4. These definitions clarify what qualifies as personal data, what constitutes processing, and how consent is legally understood.
     • Article 4(1) - Personal data: any information relating to an identified or identifiable natural person (data subject).
     • Article 4(2) - Processing: any operation performed on personal data, whether automated or not, including collection, recording, storage, use, or erasure.
     • Article 4(11) - Consent: any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.
Lawful processing
To process OSINT data lawfully, a valid legal basis must be established. Investigators must adhere to the principles relating to the processing of personal data in Article 5(1). In most cases, legitimate interest under Article 6(1)(f) is relied upon for this purpose.
Article 5(1) - Principles relating to processing of personal data
     • Lawfulness, fairness and transparency
     • Purpose limitation
     • Data minimisation
     • Accuracy
     • Storage limitation
     • Integrity and confidentiality
Article 6(1)(f) - Lawfulness of processing
     • Processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.
However, relying on this basis requires a balancing test to ensure that the processing does not override the interests or fundamental rights of the individuals involved; a sketch of how such an assessment might be documented follows.
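By way of illustration only, and emphatically not as legal advice, a controller might record the outcome of that balancing test in a structured, auditable form. Everything in the sketch below, from the class name to the pass/fail rule, is hypothetical; it mirrors the three-part test (purpose, necessity, balancing) commonly used for Article 6(1)(f).

    from dataclasses import dataclass, field

    @dataclass
    class LegitimateInterestAssessment:
        purpose: str          # the legitimate interest pursued
        necessity: str        # why less intrusive means will not suffice
        balancing_notes: str  # impact on data subjects, incl. incidental ones
        involves_children: bool = False
        safeguards: list[str] = field(default_factory=list)

        def is_defensible(self) -> bool:
            # Crude illustrative rule: children's data tips the balance
            # against processing, and some safeguard must be documented.
            return not self.involves_children and bool(self.safeguards)

    lia = LegitimateInterestAssessment(
        purpose="detect suspected insurance claim fraud",
        necessity="claim inconsistencies cannot be verified from internal data alone",
        balancing_notes="public posts only; incidental contacts pseudonymized",
        safeguards=["data minimization", "90-day retention limit", "human review"],
    )
    print(lia.is_defensible())  # True

In practice the balancing test is a reasoned legal judgement rather than a boolean, but keeping it in a structured record supports the GDPR's accountability principle.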

Profiling and automated decisions
Article 22(1) of the GDPR grants individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or similarly significant effects. According to Article 22(2), this right does not apply if the decision:
     • is necessary for entering into, or performance of, a contract between the data subject and a data controller;
     • is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
     • is based on the data subject’s explicit consent.

However, Article 22(4) prohibits such decisions when they are based on special category data referred to in Article 9(1). These include data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation.

Such processing is permitted only if one of the exceptions under Article 9(2) applies (a simplified sketch of this gating logic follows the list):
     • Article 9(2)(a): the data subject has given explicit consent to the processing of those personal data for one or more specified purposes, except where Union or Member State law provide that the prohibition referred to in Article 9(1) may not be lifted by the data subject;
     • Article 9(2)(g): processing is necessary for reasons of substantial public interest, on the basis of Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.
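Translated into system design, these rules suggest a routing gate in front of any automated scoring step. The sketch below is a deliberate oversimplification with hypothetical field names; it omits grounds such as Article 9(2)(g) and exists only to show that special category signals, or the absence of an Article 22(2) exception, must divert a case away from fully automated decisions.

    # Abridged, illustrative list of Article 9(1) special categories.
    SPECIAL_CATEGORIES = {
        "health", "biometrics_for_id", "racial_or_ethnic_origin",
        "political_opinions", "religious_beliefs", "trade_union",
        "genetic", "sex_life_or_orientation",
    }

    def route_decision(signal_fields: set[str], explicit_consent: bool,
                       contract_necessity: bool) -> str:
        """Hypothetical gate for an automated fraud-scoring pipeline."""
        if signal_fields & SPECIAL_CATEGORIES and not explicit_consent:
            # Article 22(4): no solely automated decision on special
            # category data unless an Article 9(2) exception applies.
            return "blocked"
        if not (explicit_consent or contract_necessity):
            # No Article 22(2) exception applies: a human must decide.
            return "human_review"
        return "automated_with_safeguards"

    print(route_decision({"health", "posting_frequency"}, False, True))  # blocked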

Data subject rights under the GDPR
Organizations, including fintech companies conducting investigations, must ensure that data subjects can easily exercise the following rights:
     • Article 15 - Right of access by the data subject
     • Article 16 - Right to rectification
     • Article 17 - Right to erasure (‘right to be forgotten’)
     • Article 18 - Right to restriction of processing
     • Article 20 - Right to data portability
     • Article 21 - Right to object
     • Article 22 - Rights related to automated individual decision-making, including profiling.

The GDPR applies to personal data from both private and public sources, including OSINT. Fintech investigators must ensure that any processing, especially profiling or automated decision-making, respects core data protection principles and rests on a valid legal basis.

 

EU AI Act perspective

Under the EU AI Act, AI systems used to evaluate financial trustworthiness or creditworthiness of individuals are classified as high-risk.

Annex III of the EU AI Act lists the high-risk AI systems referred to in Article 6(2), including:
     • Point 5(b): AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, except for AI systems used for the purpose of detecting financial fraud.
     • Point 5(c): AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

To mitigate these risks, the AI Act requires that high-risk AI systems comply with a set of obligations, including:
     • Article 9: Risk Management System
     • Article 10: Data and Data Governance
     • Article 11: Technical Documentation
     • Article 12: Record-Keeping
     • Article 13: Transparency and Provision of Information to Deployers
     • Article 14: Human Oversight
     • Article 27: Fundamental Rights Impact Assessment
In AI-powered OSINT fintech investigations, a fundamental rights impact assessment (FRIA, Article 27) is essential to address privacy concerns and potential indirect harm, ensuring that these systems respect individuals’ rights even when those individuals are not the primary subjects of the investigation. Together, these obligations help ensure that high-risk AI systems remain trustworthy and transparent and that they uphold fundamental rights, particularly in sensitive sectors such as finance, health, and insurance.
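As one concrete example of what Article 12-style record-keeping can look like, the sketch below writes a structured audit entry for every automated output, including who reviewed it, which also supports Article 14 human oversight. The field names and identifiers are invented for illustration.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    audit = logging.getLogger("ai_audit")

    def log_decision(system_id: str, input_ref: str,
                     output: str, reviewer: str | None) -> None:
        """Write one structured audit record per automated output."""
        audit.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "input_ref": input_ref,      # a case reference, not raw personal data
            "output": output,
            "human_reviewer": reviewer,  # oversight is traceable (Article 14)
        }))

    log_decision("osint-risk-scorer-v2", "case-0042", "escalate", "analyst_7")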

 
Conclusion

Protecting privacy remains a cornerstone of responsible OSINT practice, firmly grounded in the GDPR’s principles and increasingly reinforced by the EU AI Act. As OSINT evolves with AI-powered capabilities, it is imperative that both users and developers adhere strictly to regulatory frameworks. Only through careful alignment with legal standards can OSINT tools unlock their full potential, delivering valuable insights without compromising the rights of individuals, incidental or otherwise. Privacy is not just a box to tick; it is a fundamental commitment that must guide every stage of OSINT development and application.

 

Bibliography

ArtificialIntelligenceAct.eu. Annex III: High-Risk AI Systems. Retrieved from https://artificialintelligenceact.eu/

GDPR.EU. General Data Protection Regulation (GDPR). Retrieved from https://gdpr.eu/tag/gdpr/

Hassan, N. (2025, February 26). How to enhance OSINT investigations using AI. TechTarget. Retrieved from https://www.techtarget.com/searchEnterpriseAI/tip/How-to-enhance-OSINT-investigations-using-AI

ICW Group. (2024, August 9). Social media investigation reveals potential fraud. Retrieved from https://www.icwgroup.com/articles-insights/fighting-fraud/social-media-investigation-reveals-potential-fraud/

Koops, B.-J. (2013). Police investigations in Internet open sources: Procedural-law issues. Computer Law & Security Review, 654–665. doi: https://doi.org/10.1016/j.clsr.2013.09.004

ShadowDragon. (2025, June 20). Use cases. Retrieved from https://shadowdragon.io/use-cases/

Solove, D. J., & Hartzog, W. (2024, July 3). The Great Scrape: The Clash Between Scraping and Privacy. California Law Review, 113 (forthcoming 2025), 4. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4884485


Follow Nura H on LinkedIn