Reich explains his new role as senior advisor to the U.S. AI Safety Institute and how he’ll draw on his background as a philosopher in the work.
Mental health assessments that go beyond ‘How often do you feel blue?’
Stanford Medicine researchers are developing artificial intelligence tools that provide a more accurate picture of a person’s mental health and flag those who need help.
In just five years, the institute has made major strides engaging policy, industry, and civil society to ensure that AI is developed with humans at the center.
An AI model that uses Google Street View to spot early signs of gentrification could one day help cities target anti-displacement policies more precisely.
The latest version of ChatGPT passes the Turing test with flying colors and has a more agreeable disposition than most humans. How might our own behavior evolve as a result?
Machine learning algorithms have proven especially good at burrowing into data collected in the field and unearthing new details on not only how interventions work, but for whom.
Chatbot assistant benefits less experienced employees
The first large-scale study of a ChatGPT-like assistant in the workplace finds that it can benefit less experienced employees — and make customers happier.
“Generative agents” that draw on large language models to make breakfast, head to work, grab lunch, and ask other agents out on dates could change both gaming and social science.
New technologies aid the fight against human trafficking
An AI-powered database could help Brazilian authorities locate labor camps in the Amazon rainforest where hundreds of thousands of people are held in conditions of modern slavery.
Medical algorithms trained on adult data may be unreliable for evaluating young patients. But children’s records present complex quandaries for AI, especially around equity and consent.
A model trained on thousands of images in medical textbooks and journal articles found that dark skin tones are underrepresented in materials that teach doctors to recognize disease.
There’s a faster, cheaper way to train large language models
Development of large language models has been dominated by big tech companies because it requires extensive, expensive pretraining. Enter Sophia, a new optimization method developed by Stanford computer scientists.