Civic and Responsible AI Lab (CRAIL)
@civicandresponsibleai.com
A research lab working towards Responsible AI (and Robotics), and towards the use of AI for civil society and empowerment. Based at King's College London, UK.
Led by Martim Brandao (@martimbrandao.bsky.social).

Website: https://www.civicandresponsibleai.com/
Roundup of our robotics papers this year, 1/n: "Harvesting Perspectives" by Muhammad Malik investigates farm workers' working conditions, their perceptions of farm robots, and worker-centered visions of farm robotics. #ROMAN2025 #HRI #robots #AI #ResponsibleAI
doi.org/10.1109/ro-m...
Harvesting Perspectives: A Worker-Centered Inquiry into the Future of Fruit-Picking Farm Robots
The integration of robotics in agriculture presents promising solutions to challenges such as labour shortages and increasing global food demand. However, existing visions of agriculture robots often ...
December 18, 2025 at 12:12 PM
Reposted by Civic and Responsible AI Lab (CRAIL)
Robots powered by popular AI models are currently unsafe for general-purpose real-world use.

Researchers from @kingsnmes.bsky.social & @cmu.edu evaluated how robots that use large language models (LLMs) behave when they have access to personal information.

www.kcl.ac.uk/news/robots-...
Robots powered by popular AI models risk encouraging discrimination and violence | King's College London
Robots powered by popular AI models are currently unsafe for real-world use.
November 11, 2025 at 3:38 PM
We'll be at #AIES2025 presenting Atmadeep's work on Postcolonial Ethics for Robots: www.martimbrandao.com/papers/Ghosh... We:
- analyse 7 major roboethics frameworks, identifying gaps for the Global South
- propose principles to make AI robots culturally responsive and genuinely empowering
October 18, 2025 at 4:48 PM
Our paper on safety & discrimination of LLM-driven robots is out! doi.org/10.1007/s123...
We find that LLMs:
- Are unsafe as decision-makers for HRI
- Are discriminatory in facial expression, proxemics, security, rescue, task assignment...
- Don't protect against dangerous, violent, or unlawful uses
October 17, 2025 at 3:23 PM
Hello world! We are CRAIL. Our goal is to contribute to Responsible AI, and to use AI for civil society and for empowering marginalized groups.
Follow us to hear about the risks and social impacts of AI, critical examinations of AI fields, and new algorithms towards socially just and human-compatible tech.
October 17, 2025 at 10:49 AM