Diego Vallarino
@diegovall.bsky.social
Board member @ IDB and IDB Invest 🇺🇸 | Lived in 🇺🇾 🇨🇱 🇪🇸 🇫🇷 🇺🇸 | Immigrant | Ex-Executive at Coface, Scotiabank & Equifax | PhD, MSc, MBA | EB1A🇺🇸 | dyslexic | Author: “Survival Model for Economics” @ Amazon www.diegovallarino.com
(5/5)
The takeaway: visibility is redistribution.
Credit data, when portable, interoperable, and fair, becomes an inclusion engine.
Policies should move beyond “open data” to data equity — aligning efficiency with justice.

#FinancialInclusion #DataEconomics #AI #Uruguay
November 12, 2025 at 1:00 PM
(4/5)
Conceptually, we treat data as a non-rival public asset:
its reuse doesn’t deplete value — it multiplies it.
Like infrastructure, data can be a redistributive lever when governed ethically and shared equitably.

This reframes inclusion as an architectural problem, not a fiscal one.
November 12, 2025 at 1:00 PM
(3/5)
The results:
- Average interest burden fell from 11.8% → 9.8% under Score+.
- Gini of financial burden dropped from 0.319 → 0.276.
- Poverty declined by nearly 1 percentage point.
These shifts occurred solely through improved data inclusion.
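For context on the Gini figure above, here is a small sketch of how a Gini coefficient of financial burden can be computed from household microdata; the simulated distributions and the "Score+" adjustment below are illustrative assumptions, not the survey data or the paper's results.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    shares = np.cumsum(x) / x.sum()          # cumulative shares (Lorenz curve)
    return (n + 1 - 2 * shares.sum()) / n

# Toy usage: interest paid as a share of household income under two regimes.
rng = np.random.default_rng(1)
burden_status_quo = rng.gamma(2.0, 0.06, size=5_000)     # skewed burden distribution
burden_score_plus = np.minimum(burden_status_quo, 0.20)  # stylized: the heaviest burdens fall
print(gini(burden_status_quo), gini(burden_score_plus))
```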
November 12, 2025 at 1:00 PM
(2/5)
Using microdata from Uruguay’s 2021 Household Survey, we simulate three regimes:
• Negative-only data (status quo)
• Partial positive data (Score+)
• Full synthetic visibility (Open Finance)

Expanding visibility alone reduced poverty and interest burden — no transfers, no subsidies.
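A stylized sketch of what such a regime comparison can look like in code; the pricing rule, visibility shares, and interest rates below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
income = rng.lognormal(mean=9.5, sigma=0.6, size=n)    # household income
debt = 0.3 * income * rng.uniform(0.5, 1.5, size=n)    # outstanding debt
good_payer = rng.random(n) < 0.8                       # true repayment behavior

def interest_rate(visible_share):
    """Toy pricing rule: invisible borrowers are priced as risky;
    visible good payers get a lower rate because lenders see their history."""
    visible = rng.random(n) < visible_share
    rate = np.full(n, 0.12)                            # default: negative-only pricing
    rate[visible & good_payer] = 0.08                  # rewarded when history is visible
    rate[visible & ~good_payer] = 0.14                 # penalized when history is visible
    return rate

for name, share in [("negative-only", 0.0), ("Score+", 0.5), ("open finance", 1.0)]:
    burden = interest_rate(share) * debt / income      # interest paid / income
    print(f"{name:14s} mean burden = {burden.mean():.3f}")
```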
November 12, 2025 at 1:00 PM
I appreciate any feedback, comments, or thoughts on this work.
If you find it relevant, feel free to share it so the discussion on fair and transparent AI in finance can reach a wider audience.
Thank you all for the support and engagement. 🙏 #Econsky
November 11, 2025 at 1:28 AM
(4/4)
This work invites both academics and practitioners to rethink AI governance.
Moving beyond black-box models, it builds systems that not only predict but also explain why a decision was made.
I’d love to hear your views on how Causal AI can advance fairness and accountability in financial decision-making.
November 11, 2025 at 1:26 AM
(3/4)
Results show that Causal-GNNs can reduce algorithmic bias without compromising predictive accuracy.
Validated on real datasets in fraud detection, credit scoring, and AML, the framework demonstrates how explainable AI can enhance trust and compliance in finance.
November 11, 2025 at 1:26 AM
(2/4)
The model integrates a Structural Causal Model (SCM) with a Graph Neural Network (GNN) to separate causality from correlation.
It provides a transparent foundation for ethical AI, improving fairness, interpretability, and regulatory alignment (GDPR, ECOA, Fair Lending).
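A minimal sketch of the general idea, not the paper's implementation: approximate the causal-adjustment step by residualizing node features on a sensitive attribute before a simple message-passing layer. All names and the adjustment rule are illustrative assumptions.

```python
import torch

def residualize(X, s):
    """Remove the linear effect of a sensitive attribute s from features X
    (a crude stand-in for an SCM-based adjustment)."""
    s = s.float().unsqueeze(1)                      # (n, 1)
    S = torch.cat([torch.ones_like(s), s], dim=1)   # design matrix with intercept
    beta = torch.linalg.lstsq(S, X).solution        # per-feature OLS coefficients
    return X - S @ beta                             # residual (adjusted) features

class MeanAggGNNLayer(torch.nn.Module):
    """One mean-aggregation message-passing layer over a dense adjacency."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out)

    def forward(self, X, A):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((A / deg) @ X))

# Toy usage on a random borrower graph.
n, d = 8, 4
X = torch.randn(n, d)                       # node features (borrower attributes)
s = torch.randint(0, 2, (n,))               # sensitive attribute (group membership)
A = (torch.rand(n, n) > 0.7).float()
A = ((A + A.T + torch.eye(n)) > 0).float()  # symmetric adjacency with self-loops
H = MeanAggGNNLayer(d, 16)(residualize(X, s), A)   # adjusted node embeddings
```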
November 11, 2025 at 1:26 AM
2/4 The proposed model integrates a Structural Causal Model (SCM) with a GNN architecture to disentangle causality from correlation — improving interpretability, fairness, and regulatory compliance (GDPR, ECOA, Fair Lending Laws).
November 11, 2025 at 12:25 AM
4/4
Beyond prediction, this framework offers a policy tool: it helps governments identify unrelated but viable diversification opportunities.
It bridges AI and economic complexity — shifting industrial policy from “what we export” to “what we could sustainably build next.”
#EconAI #TradeComplexity
November 9, 2025 at 1:46 PM
3/4
Results: the GNN achieves R² = 0.71, far outperforming traditional methods.
Simulated shocks reveal new diversification paths for Uruguay — in biotech, renewables, precision agriculture, and hydrogen technologies — sectors not central today but structurally feasible tomorrow.
November 9, 2025 at 1:46 PM
2/4
We combine real BACI-CEPII trade data with synthetic shock scenarios (tariffs, demand, exchange rates) generated via GANs to build hybrid trade networks.
The GNN learns which products can increase a country’s Economic Complexity Index (ECI) — even under global disruption.
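To make the intuition concrete, a compact sketch of the "shock the trade matrix, then recompute complexity" idea: it uses a simple multiplicative perturbation in place of the GAN-generated scenarios and the standard method-of-reflections ECI in place of the learned GNN score. Everything below is an illustrative assumption, not the paper's pipeline.

```python
import numpy as np

def rca_binary(exports):
    """Balassa revealed comparative advantage, binarized at RCA >= 1."""
    total = exports.sum()
    rca = (exports / exports.sum(axis=1, keepdims=True)) / \
          (exports.sum(axis=0, keepdims=True) / total)
    return (rca >= 1).astype(float)

def eci(exports, n_iter=18):
    """Country complexity via a fixed number of reflections, standardized."""
    M = rca_binary(exports)
    kc0, kp0 = M.sum(axis=1), M.sum(axis=0)      # diversity, ubiquity
    kc, kp = kc0.copy(), kp0.copy()
    for _ in range(n_iter):
        kc, kp = (M @ kp) / np.maximum(kc0, 1), (M.T @ kc) / np.maximum(kp0, 1)
    return (kc - kc.mean()) / kc.std()

# Toy usage: a random country x product export matrix and a tariff-like shock
# that depresses exports of one product group.
rng = np.random.default_rng(0)
exports = rng.lognormal(mean=2.0, sigma=1.0, size=(30, 50))
shocked = exports.copy()
shocked[:, 40:] *= 0.6                           # synthetic shock scenario
print(eci(exports)[0], eci(shocked)[0])          # complexity of country 0, before/after
```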
November 9, 2025 at 1:46 PM
"Un profesor titular en la Universidad Complutense de Madrid gana unos 35.000 euros al año. En la Universidad de Michigan, un profesor promedio gana 207.000 dólares (unos 195.000 euros). Es decir, que, en cuatro años, un académico en Michigan cobra lo que uno en España recibiría en dos décadas."
May 8, 2025 at 12:03 PM
5/
Beyond the math:
This paper argues that financial data is not neutral—it carries history, exclusion, and power.
Fair AI demands we question not just algorithms, but how we collect and use data.
It’s data anthropology meets causal inference.
#FinancialJustice #CausalThinking
April 22, 2025 at 12:24 PM
4/
Causal GNNs outperform:
🔹 Standard GNNs
🔹 Fairness-aware ML
🔹 Post hoc counterfactual models
Why? Because fairness must be built in, not added later.
It’s time to rethink AI governance from the ground up.
#RegTech #ExplainableAI
April 22, 2025 at 12:24 PM
3/
Results?
⚖️ 74% reduction in demographic bias
📊 75% improvement in equal opportunity
🧠 65% fewer counterfactual fairness violations
All while keeping strong predictive performance (F1 = 0.79, AUC = 0.88).
Fairness no longer has to be a trade-off.
#ResponsibleAI #AIRegulation
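For readers unfamiliar with the two group-fairness metrics cited above, a minimal sketch of their standard definitions; the data here are toy values, not the paper's.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """|P(yhat = 1 | group = 1) - P(yhat = 1 | group = 0)|."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))

# Toy usage with random labels and binary predictions.
rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, 1_000)
group = rng.integers(0, 2, 1_000)
y_pred = (rng.random(1_000) > 0.5).astype(int)
print(demographic_parity_diff(y_pred, group))
print(equal_opportunity_diff(y_true, y_pred, group))
```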
April 22, 2025 at 12:24 PM