https://www.maxkagan.com/
vrollet.github.io/files/Ideolo...
www.andrewbenjaminhall.com/HallSun25.pdf
For every uncorrected p value you must add an extra letter to the claim.
“Eating chocolate maaaaaaaaay be associated with lower rates of stroke”
🗓️ Sat, July 26 | 11:00AM–1:30PM
📍 Bella Center, Auditorium 12
📝 Pre-register by ***July 1***: umdsurvey.umd.edu/jfe/form/SV_...
Measuring and Modeling Neighborhoods, by Cory McCartan (New York University), Jacob R. Brown (Boston University), and Kosuke Imai (Harvard University): Granular geographic data present new opportunities to understand how neighborhoods are formed, and how they…
PDF: fabriziogilardi.org/resources/pa...
🧵
Three scholars at Columbia, Michigan, & Maryland just introduced a measure of the partisan leanings of employers in the U.S.
The data is constructed by linking voter registrations to online worker profiles.
VRscores capture the political affiliations of 21.8M workers across 2.6M employers.
polarizationresearchlab.org/hiring/
When pretraining at 8B scale, SuperBPE models consistently outperform the BPE baseline on 30 downstream tasks (+8% MMLU), while also being 27% more efficient at inference time.🧵
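The inference-time savings come from letting merges cross whitespace, so frequent multi-word phrases become single "superword" tokens and the same text is encoded in fewer tokens. A toy sketch of that idea, assuming a minimal character-level BPE trainer (`bpe_train` and its greedy tie-breaking are my simplifications, not the paper's implementation):

```python
from collections import Counter

def bpe_train(text, num_merges, cross_whitespace=False):
    """Toy byte-pair-encoding trainer. With cross_whitespace=True, merges may
    span spaces, yielding "superword" tokens in the spirit of SuperBPE."""
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not cross_whitespace:
            # standard BPE pretokenization: never merge across a space
            pairs = Counter({p: c for p, c in pairs.items()
                             if " " not in p[0] and " " not in p[1]})
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        merged, i = [], 0
        while i < len(tokens):                 # replace every occurrence of `best`
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                merged.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

phrase = "by the way by the way by the way"
base, _ = bpe_train(phrase, 20)                         # whitespace-bounded BPE
sup, _  = bpe_train(phrase, 20, cross_whitespace=True)  # superword merges allowed
# the superword vocabulary encodes the same text in fewer tokens
```

Fewer tokens per sequence is what translates into cheaper inference, since transformer cost scales with sequence length.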
Paper link and 🧵
tinyurl.com/jp2pk4rt
1/10