Golub Capital Social Impact Lab
@gsbsilab.bsky.social
35 followers · 16 following · 50 posts
Led by @susanathey.bsky.social, the Golub Capital Social Impact Lab at the Stanford University Graduate School of Business uses digital technology and social science to improve the effectiveness of social sector organizations.
gsbsilab.bsky.social
New Paper Alert! Read the thread below for key takeaways from “Does Q&A Boost Engagement? Health Messaging Experiments in the United States and Ghana” by @erikakirgios.bsky.social @susanathey.bsky.social @angeladuckworth.bsky.social et al.
erikakirgios.bsky.social
What happens if you *ask* instead of *tell*? Turns out, teasing people with a question before sharing a fact can shape whether they engage with critical health information. Read our new paper in Management Science to learn more: pubsonline.informs.org/doi/full/10....
Does Q&A Boost Engagement? Health Messaging Experiments in the United States and Ghana | Management Science
pubsonline.informs.org
gsbsilab.bsky.social
Earlier this month @susanathey.bsky.social joined Stanford University President Levin, @siepr.bsky.social Director @nealemahoney.bsky.social and other distinguished faculty to connect over cutting-edge research developments at Stanford Open Minds New York.
gsbsilab.bsky.social
Listen to Raj Chetty explain how surrogate indices, which combine multiple short-term outcomes to predict long-term effects, make it possible to make decisions more quickly. With @nber.org

www.nber.org/research/vid...
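The surrogate-index idea can be sketched in a few lines: fit a model of the long-term outcome on short-term surrogates in historical data, then use its predictions to estimate a treatment effect before the long-term outcome is observed. The linear data-generating process and all numbers below are illustrative assumptions, not the actual application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical data: short-term surrogate s and long-term outcome y, no treatment.
n_hist = 50_000
s_hist = rng.normal(size=n_hist)
y_hist = 2.0 * s_hist + rng.normal(size=n_hist)  # long-term outcome driven by the surrogate

# Fit the surrogate index: predict y from s (here, simple least squares).
beta = np.polyfit(s_hist, y_hist, 1)  # [slope, intercept]

# New experiment: treatment shifts the surrogate by 0.3; y is not yet observed.
n_exp = 50_000
t = rng.integers(0, 2, size=n_exp)
s_exp = 0.3 * t + rng.normal(size=n_exp)

# Estimated long-term effect via predicted outcomes, available immediately.
y_pred = np.polyval(beta, s_exp)
effect = y_pred[t == 1].mean() - y_pred[t == 0].mean()
print(effect)  # close to the true long-term effect, 2.0 * 0.3 = 0.6
```

The design choice is that the experiment never has to wait for the long-term outcome: the historical surrogate model carries that information.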
gsbsilab.bsky.social
Check out @susanathey.bsky.social’s interview on how AI-powered after-the-fact quality checks boost driver performance at Uber—and what it means when AI can track compliance. Insights via @StanfordGSB.

www.gsb.stanford.edu/insights/how...
How Uber Steers Its Drivers Toward Better Performance
www.gsb.stanford.edu
gsbsilab.bsky.social
“Governments will play a key role…in whether we actually develop the technology that will help lower-skilled workers become more productive by using AI to augment them with expertise that previously was difficult to acquire.”
gsbsilab.bsky.social
The talk will cover:

✔️How AI is altering industry dynamics & structures

✔️How these shifts will impact public services such as health and education

✔️How AI market concentration could tax the global economy

✔️Why govt policy will be crucial in shaping AI competition and innovation
gsbsilab.bsky.social
AI & digitisation are rapidly reshaping the way we work.

Policymakers need to understand how, and what to do about it.

Watch @Susan_Athey speak to G20 leaders about these issues tomorrow 16 July @ 13:30 CET. #G20SouthAfrica

bit.ly/3GyMFgm or bit.ly/44PTXFP
gsbsilab.bsky.social
Beyond predictions, @keyonv.bsky.social also worked with @gsbsilab.bsky.social to show how these models can produce better estimates for important problems, such as the wage gap between men and women with the same career histories. Learn more here: bsky.app/profile/gsbs...
gsbsilab.bsky.social
Foundation models make great predictions. How should we use them for estimation problems in social science?

New PNAS paper from @susanathey.bsky.social, @keyonv.bsky.social & the Blei Lab:
Bad news: Good predictions ≠ good estimates.
Good news: Good estimates possible by fine-tuning models differently 🧵
gsbsilab.bsky.social
If we know someone’s career history, how well can we predict which jobs they’ll have next? Read our profile of @keyonv.bsky.social to learn how ML models can be used to predict workers’ career trajectories & better understand labor markets.

medium.com/@gsb_silab/k...
Keyon Vafa: Predicting Workers’ Career Trajectories to Better Understand Labor Markets
If we know someone’s career history, how well can we predict which job they’ll have next?
medium.com
gsbsilab.bsky.social
Analyzing representations tells us where history explains the gap.

Ex: there are two kinds of managers: those who were once engineers and those who weren't. The first group is paid more and has a higher share of men than the second.

Models that don’t use history omit this distinction.
gsbsilab.bsky.social
We use these methods to estimate wage gaps adjusted for full job history, following the literature on gender wage gaps.

Compared to simpler methods, full job history explains a substantial fraction of the remaining wage gap. But there's still a lot that history can't account for.
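The "adjusted gap" here is the standard regression-adjusted estimand from the wage-gap literature. A stylized numpy version (illustrative coefficients, with one scalar stand-in for the history representation) shows how controlling for history shrinks the raw gap:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Stylized data: a 'male' indicator and a history feature (e.g. years of
# engineering experience) that both differs by gender and raises wages.
male = rng.integers(0, 2, size=n)
history = rng.normal(loc=0.5 * male, scale=1.0)
wage = 0.10 * male + 0.40 * history + rng.normal(scale=1.0, size=n)

# Raw gap: difference in mean wages between men and women.
raw_gap = wage[male == 1].mean() - wage[male == 0].mean()

# History-adjusted gap: coefficient on 'male' controlling for history.
X = np.column_stack([np.ones(n), male, history])
coef, *_ = np.linalg.lstsq(X, wage, rcond=None)
adjusted_gap = coef[1]

print(raw_gap, adjusted_gap)  # roughly 0.30 vs 0.10: history explains part of the gap
```

In the paper the control is a fine-tuned representation of the whole job sequence rather than a single hand-picked feature, but the estimand is the same.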
gsbsilab.bsky.social
This result motivates new fine-tuning strategies.

We consider 3 strategies similar to methods from the causal estimation literature. E.g. optimize representations to predict the *difference* in male-female wages instead of individual wages.

All perform well on synthetic data.
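One simple plug-in that targets the same estimand as the "predict the difference" strategy: fit a separate wage model on the representation within each gender group, then average the predicted male-female difference over everyone. This is a stylized stand-in for the fine-tuning objective, not the paper's method; the one-dimensional `rep` and all coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Stylized setup: 'rep' is a (here one-dimensional) history representation.
male = rng.integers(0, 2, size=n)
rep = rng.normal(loc=0.5 * male, scale=1.0)
wage = 0.10 * male + 0.40 * rep + rng.normal(scale=1.0, size=n)

# Fit a wage model on the representation separately within each gender group...
b_m = np.polyfit(rep[male == 1], wage[male == 1], 1)
b_f = np.polyfit(rep[male == 0], wage[male == 0], 1)

# ...then average the predicted male-female difference over the population,
# so the target is the adjusted gap itself rather than individual wages.
diff = np.polyval(b_m, rep) - np.polyval(b_f, rep)
adjusted_gap = diff.mean()
print(adjusted_gap)  # close to the direct gender coefficient, 0.10
```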
gsbsilab.bsky.social
Two extremes:

A representation that's just the identity function meets condition (1) trivially but not (2).

A representation that uses a very simple summary of history (e.g., # of years worked) meets (2) but fails (1).
gsbsilab.bsky.social
New result: Fast + consistent estimates are possible even if a representation drops info

Two main fine-tuning conditions:
1. Representation only drops info that isn't correlated w/ both wage & gender
2. Representation is simple enough that it’s easy to model wage & gender from it
gsbsilab.bsky.social
Intuition: If working in job X at some point has a small effect on wages, but men are much likelier to have worked in job X than women, it may be omitted by a model optimized to predict wage.
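This intuition is easy to reproduce in a toy simulation (all numbers illustrative, not from the paper): give job X a small wage effect but make it much more common among men, and compare the adjusted gap with and without job X in the representation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Job X: small effect on wages, but much likelier among men than women.
male = rng.integers(0, 2, size=n)
job_x = (rng.random(n) < np.where(male == 1, 0.8, 0.2)).astype(float)
wage = 0.50 * male + 0.10 * job_x + rng.normal(scale=1.0, size=n)

def gender_coef(extra_features):
    """Adjusted gap: coefficient on 'male' in a wage regression."""
    X = np.column_stack([np.ones(n), male] + extra_features)
    coef, *_ = np.linalg.lstsq(X, wage, rcond=None)
    return coef[1]

gap_full = gender_coef([job_x])   # representation keeps job X
gap_dropped = gender_coef([])     # representation omits job X
print(gap_full, gap_dropped)      # ~0.50 vs ~0.56: omitting job X biases the gap
```

A wage-prediction objective has little incentive to keep job X (its wage effect is only 0.10), yet dropping it moves the adjusted gap: classic omitted variable bias.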
gsbsilab.bsky.social
Foundation models are usually fine-tuned to make predictions (like wages).

But representations fine-tuned this way can induce omitted variable bias: the gap adjusted for full history can be different from the gap adjusted for the representation of job history.
gsbsilab.bsky.social
We use CAREER, a foundation model of job histories. It’s pretrained on resumes but its representations can be fine-tuned on the smaller datasets used to estimate wage gaps.
gsbsilab.bsky.social
But this discards information that’s relevant to the wage gap.

In contrast, foundation models learn *representations*: lower-dimensional variables that summarize information.
gsbsilab.bsky.social
Consider estimating the wage gap explained by differences in job history.

Job history is high-dimensional since there are many possible sequences of jobs. So most economic models describe histories using hand-selected summary stats (e.g., # of years worked).
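A minimal sketch of the hand-selected-summary approach (hypothetical job codes and stats, chosen for illustration): collapse each worker's job sequence into a few numbers before any modeling.

```python
# Hypothetical job-history sequences: one list of yearly job codes per worker
# ('eng' = engineering, 'mgr' = management, None = not working that year).
histories = [
    ["eng", "eng", "mgr", "mgr"],
    ["mgr", None, "mgr", "mgr"],
    [None, "eng", "eng", "eng"],
]

def summarize(history):
    """Collapse a high-dimensional job sequence into hand-picked summary stats."""
    return {
        "years_worked": sum(job is not None for job in history),
        "years_eng": history.count("eng"),
        "last_job": next((job for job in reversed(history) if job), None),
    }

stats = [summarize(h) for h in histories]
print(stats[0])  # {'years_worked': 4, 'years_eng': 2, 'last_job': 'mgr'}
```

Whatever the summary function discards (e.g., the order of jobs) is invisible to every downstream model, which is exactly the limitation the thread contrasts with learned representations.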
gsbsilab.bsky.social
Decompositions can inform policy: a large explained gender wage gap can suggest differences in choices or opportunities earlier in a worker’s career, while an unexplained gap may arise due to differences in factors such as skill, care responsibilities, or bargaining.