Acquiring fraud • Deep learning • Temporal validation • Cost sensitive metrics • Operational governance

Acquiring fraud detection. Less friction. Less loss. Governed AI.

CyberAntifraud is my applied PhD hub at NOVA IMS, focused on fraud detection in merchant acquiring. I work at the intersection of fraud analytics, deep learning, and operational delivery. The goal is simple. Reduce false positives without losing fraud capture, and reduce false negatives without breaking governance.

Positioning
This hub publishes protocols, templates, and reproducible artefacts. No sensitive operational data is disclosed. Public examples use synthetic or anonymised aggregates.

Temporal Integrity Gate

time aware
Blocked splits, rolling windows, leakage control, drift stress testing.
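A minimal sketch of what a blocked split with an embargo gap can look like is below. The column name, the number of blocks, and the one-day embargo are illustrative assumptions, not the published protocol.

```python
# Illustrative sketch: blocked, time-ordered splits with an embargo gap.
# "event_time", n_blocks=5 and the 1-day embargo are assumptions for the example.
import pandas as pd

def blocked_splits(df, time_col="event_time", n_blocks=5, embargo="1D"):
    """Yield (train_df, test_df) pairs: each test block starts strictly after
    all training rows plus an embargo gap, excluding near-boundary leakage."""
    df = df.sort_values(time_col)                       # time_col must be datetime
    block = pd.qcut(df[time_col].rank(method="first"), n_blocks, labels=False)
    gap = pd.Timedelta(embargo)
    for b in range(1, n_blocks):
        test_mask = block == b
        test_start = df.loc[test_mask, time_col].min()
        train_mask = df[time_col] < (test_start - gap)  # embargo controls leakage
        yield df[train_mask], df[test_mask]
```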

Cost and Friction Engine

net benefit
Cost curves, threshold policy, friction budget, operational capacity constraints.
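As a rough illustration, the sketch below traces expected net benefit per transaction across alert thresholds. The avoided-loss and friction-cost figures are placeholders; in practice they would be calibrated to the portfolio.

```python
# Illustrative net benefit curve; cost figures are placeholders, not estimates.
import numpy as np

def net_benefit_curve(y_true, scores, avoided_loss=100.0, friction_cost=5.0):
    """Expected net benefit per transaction at each threshold: caught fraud
    counts as avoided loss, every false positive costs friction."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    curve = []
    for t in np.linspace(0.0, 1.0, 101):
        alert = scores >= t
        tp = np.sum(alert & (y_true == 1))
        fp = np.sum(alert & (y_true == 0))
        curve.append((t, (avoided_loss * tp - friction_cost * fp) / len(y_true)))
    return curve
```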

Governance Trace Layer

audit ready
Model cards, release gates, monitoring evidence, rollback triggers, oversight.

Data disclosure policy. No sensitive operational data is published. Public examples rely on synthetic or anonymised aggregates.

Offer for fraud teams

What I can deliver

  • Time aware evaluation protocol, rolling windows, leakage controls, reproducible reporting.
  • Cost model and threshold policy: net benefit curves, friction budget, capacity constraints.
  • Model benchmarking: strong baselines first, deep learning only when it wins under temporal backtests.
  • Governance pack: model cards, release gates, monitoring plan, rollback triggers, audit evidence.

Practical scope. Acquiring fraud, card present and e-commerce. Designed for regulated environments.

Engagement modes

  • Advisory review of your current stack and evaluation practice. Findings, risks, concrete fixes.
  • Hands on build of the evaluation harness and governance templates, ready for production use.
  • Benchmark sprint: baseline suite, temporal CV, cost curves, recommendation with evidence.
  • Enablement: train analysts and engineers to run the pipeline and keep it honest under drift.

You keep ownership. I deliver the method, artefacts, and evidence trail.

Credibility signals

PhD at NOVA IMS • Acquiring domain focus • Time aware validation • Cost sensitive evaluation • Audit ready governance • Reproducible artefacts

This site is the public face of the method. Private engagements remain private. Nothing sensitive is published here.

Research focus

Research questions

  • How can we reduce false positives while preserving or improving fraud capture under strict time aware validation?
  • How should we evaluate models when the real target is net benefit, not accuracy?
  • How can we detect and control drift in acquiring, where merchant and attacker behaviour evolves continuously?
  • How can governance, auditability, and human oversight be operationalised without reducing performance?

Scope. Card present and e-commerce acquiring. Merchant context. Behavioural signals. Cost aware decision policies.

Facts and figures

  • Industry scale context: 1 TB
  • Target papers: 3
  • Survey target: Q1
  • Time aware validation: strict

Core outputs on this hub

Evaluation protocol • Cost model note • Model card template • LLM attack simulation • Operational explainability

Protocol and templates

Evaluation protocol

Fraud data is non-stationary. Random splits are misleading. This work uses blocked time series validation and rolling windows to approximate production reality, quantify drift sensitivity, and prevent leakage.
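To make the rolling-window part concrete, here is a minimal backtest sketch: it refits a simple baseline on each trailing window and scores the following month, so every reported number is out of time and degradation across windows becomes the drift signal. The monthly cadence, the logistic regression baseline, and the column names are assumptions for illustration only.

```python
# Rolling-window backtest sketch; cadence, baseline model and column names are
# illustrative assumptions, not the protocol itself.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

def rolling_backtest(df, feature_cols, label_col="is_fraud",
                     time_col="event_time", train_months=6, test_months=1):
    """Refit on each trailing window, score the next month(s) out of time,
    and report average precision per window; drops across windows flag drift."""
    df = df.sort_values(time_col)
    month = df[time_col].dt.to_period("M")
    months = month.unique()
    rows = []
    for i in range(train_months, len(months) - test_months + 1):
        train = df[month.isin(months[i - train_months:i])]
        test = df[month.isin(months[i:i + test_months])]
        model = LogisticRegression(max_iter=1000).fit(train[feature_cols], train[label_col])
        scores = model.predict_proba(test[feature_cols])[:, 1]
        rows.append({"test_month": str(months[i]),
                     "avg_precision": average_precision_score(test[label_col], scores)})
    return pd.DataFrame(rows)
```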


Cost model note

The primary KPI is expected net benefit. False positives create friction and operational cost. False negatives create direct loss. This note defines assumptions, constraints, and reporting requirements for threshold selection.
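One hedged sketch of how threshold selection can respect those constraints: maximise expected net benefit while keeping the alert rate within an assumed manual-review capacity. The cost figures and the 0.5 percent capacity cap below are illustrative, not recommendations.

```python
# Threshold policy sketch under a friction budget / capacity constraint.
# avoided_loss, friction_cost and max_alert_rate are illustrative assumptions.
import numpy as np

def pick_threshold(y_true, scores, avoided_loss=100.0, friction_cost=5.0,
                   max_alert_rate=0.005):
    """Maximise expected net benefit subject to the alert rate staying within
    manual-review capacity."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    best_t, best_benefit = None, float("-inf")
    for t in np.linspace(0.0, 1.0, 201):
        alert = scores >= t
        if alert.mean() > max_alert_rate:            # over the ops capacity, skip
            continue
        tp = np.sum(alert & (y_true == 1))
        fp = np.sum(alert & (y_true == 0))
        benefit = (avoided_loss * tp - friction_cost * fp) / len(y_true)
        if benefit > best_benefit:
            best_t, best_benefit = t, benefit
    return best_t, best_benefit
```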


Model card template

Audit ready documentation. Model overview, data windows, temporal results, drift monitoring, governance sign off, and rollback criteria.
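A hypothetical skeleton of such a card, kept as plain data so it can be versioned and diffed; every field value below is a placeholder, not a real release.

```python
# Hypothetical model card skeleton; all values are placeholders.
import json

model_card = {
    "overview": {"name": "acquiring-fraud-scorer", "version": "0.0.0",
                 "owner": "fraud-analytics", "intended_use": "transaction scoring"},
    "data_windows": {"training": "YYYY-MM to YYYY-MM", "validation": "YYYY-MM",
                     "split_strategy": "blocked, time ordered, with embargo"},
    "temporal_results": {"metric": "average precision per rolling window",
                         "windows": []},
    "drift_monitoring": {"signals": ["score distribution", "feature stability"],
                         "review_cadence": "weekly"},
    "governance": {"sign_off": None, "release_gate": "pending"},
    "rollback": {"trigger": "net benefit below agreed floor for two windows",
                 "procedure": "revert to previous approved version"},
}
print(json.dumps(model_card, indent=2))
```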


Governance and ethics

Governance principles

  • Data minimisation and explicit purpose limitation for anti-fraud use.
  • Documented lineage. Data version, features, training window, parameters.
  • Human oversight for high impact decisions with clear accountability.
  • Auditability by design. Logs, model cards, decision traceability.
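As one possible shape for the last principle, the sketch below serialises a single scoring decision into an append-only audit record; field names and the hashing choice are illustrative assumptions.

```python
# Decision-trace sketch for an append-only audit log; field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(txn_id, score, decision, model_version, data_version, top_reasons):
    """Build one decision-trace entry linking the outcome to model and data versions."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "transaction_ref": hashlib.sha256(str(txn_id).encode()).hexdigest(),  # no raw identifiers
        "model_version": model_version,
        "data_version": data_version,
        "score": round(float(score), 4),
        "decision": decision,          # e.g. approve / review / decline
        "top_reasons": top_reasons,    # feature-level contributions shown to analysts
    })
```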

Operational explainability

Explainability is treated as an operational instrument. The goal is to enable analysts, risk teams, and auditors to understand why a decision occurred and how stable that reason is under drift.
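One way to make that stability question measurable, sketched under the assumption of a fitted linear model with coefficient-times-value attributions (a deployment might use SHAP or another method): compare the top reasons per time window and flag divergence.

```python
# Reason-stability sketch; assumes a fitted linear model and per-window frames.
import numpy as np

def reason_stability(model, window_frames, feature_cols, top_k=5):
    """Return the top-k features by mean absolute contribution in each window,
    so analysts can see whether the 'why' behind alerts shifts under drift."""
    per_window = {}
    for name, frame in window_frames.items():
        contrib = frame[feature_cols].values * model.coef_.ravel()  # linear attribution
        ranked = np.argsort(-np.abs(contrib).mean(axis=0))[:top_k]
        per_window[name] = [feature_cols[i] for i in ranked]
    return per_window  # diverging lists across windows flag unstable reasons
```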

Policy. Privacy and data disclosure

Disclosure statement

No production secrets, sensitive rules, merchant identities, or personal data are published. Public materials use synthetic data or anonymised aggregates. Any public robustness demonstrations are designed to inform defence, not enable abuse.

Papers and outputs

  • Survey: Deep learning for acquiring fraud detection. Status: in progress. Focus: SLR, metrics, temporal validation, robustness, governance. Preprint coming soon.
  • Methods: Temporal CV and cost model. Status: planned. Focus: protocol, cost curves, drift control, baselines. Draft coming soon.
  • Results: Models vs baselines on acquiring context. Status: planned. Focus: model families, operational trade offs, governance. Draft coming soon.
  • Fraud attack simulation: Robustness harness. Status: planned. Focus: attack scenarios, canary tests, evaluation harness. GitHub link coming soon.

If you want a fast view of my work, start with the protocol PDFs, then reach out by email with your constraints.

Contact

Paulo Saramago • Lisbon, Portugal

Academic email: psaramago@novaims.unl.pt

LinkedIn: linkedin.com/in/saramago

Alternative email: psaramago@gmail.com

For academic collaboration or applied work, include: acquiring context (card present or e-commerce), your current validation split strategy, main pain points (false positives, false negatives, drift, governance), and any constraints (latency, ops capacity, audit requirements).

Policies

Privacy and data disclosure: see the disclosure statement above.
