Stanford Institute for Human-Centered Artificial Intelligence

Generative AI and the social divide

The growing threat of disinformation leads people not only to believe falsehoods, says Nate Persily, but also to disbelieve facts.

Privacy in the AI era

It’s basically impossible to escape digital surveillance across most facets of life, says Jen King. Artificial intelligence may compound the risks.

Getting granular about gentrification

An AI model that uses Google Street View to spot early signs of gentrification could one day help cities target anti-displacement policies more precisely.

The opportunity gap in social sector AI

Nonprofits are eager to leverage AI tools for mission-related impact. A working paper explores the untapped potential.

AI helps patients in crisis access timely care

A new model to identify and triage high-risk messages to an online mental health platform dramatically reduced response time for those in urgent need.

Seven AI trends that will make headlines in 2024

From white-collar work shifts to large video models, here's what Stanford HAI faculty and fellows predict will make headlines in 2024.

Amy Zegart on AI and spycraft

A profession that once hunted diligently for secrets is now picking through huge haystacks for needles of insight – precisely the kind of work at which AI excels.

Training AI experts for public service

You can’t regulate artificial intelligence without technical talent, says Stanford HAI’s Daniel Zhang. Tech, Ethics & Policy Fellows are helping shape the conversation.

New leaders join Stanford HAI

Three new faculty associate directors and a new deputy director will help shape the future of human-centered artificial intelligence.

Using AI to help refugees succeed

Machine learning tools are helping countries place refugees where they’re most likely to find employment.

Tuning our algorithmic amplifiers

The values built into social media algorithms are highly individualized. Could we reshape our feeds to benefit society?

Tools for teachers

Stanford education researchers are at the forefront of building natural language processing systems that will support teachers and improve instruction in the classroom.

“Generative agents” change the game

“Generative agents” that draw on large language models to make breakfast, head to work, grab lunch, and ask other agents out on dates could change both gaming and social science.

Coding art

A new tool powered by a large language model makes it easier for generative artists to create and edit with precision.

Meet the new HAI graduate and postdoc fellows

This year’s cohort includes scholars from sociology, law, art, computer science, mechanical engineering, anthropology, psychology, ethics, ecology, and more.

The problem of pediatric data

Medical algorithms trained on adult data may be unreliable for evaluating young patients. But children’s records present complex quandaries for AI, especially around equity and consent.

AI uncovers bias in dermatology training tools

A model trained on thousands of images in medical textbooks and journal articles found that dark skin tones are underrepresented in materials that teach doctors to recognize disease.

Trust issues

An increasing number of people are turning to AI for help in sensitive areas like financial planning and medical advice, but researchers say large language models aren’t trustworthy enough for such critical jobs.

Congressional staffers go to AI bootcamp

To effectively regulate artificial intelligence, lawmakers must first understand it. A Stanford HAI workshop helped staffers think critically about this emerging technology.

Developing curriculum for an AI-powered future

Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.

AI’s hidden racial variables

James Zou on how AI that predicts patients' race based on medical images could improve or exacerbate health care disparities.

Why ethics teams can’t fix tech

New research suggests that tech industry ethics teams lack resources and authority, making their effectiveness spotty at best.

What the European Union AI Act means for the U.S.

Experts explored the finer points of the regulation poised to become the first comprehensive legal framework for artificial intelligence.

ChatGPT outscores med students on clinical exam questions

Will AI’s ability to analyze medical text and offer diagnoses force us to rethink how we educate doctors?

The next generation of AI scholars

A pilot project invites a cross-disciplinary group of students to explore fresh approaches to human-centered artificial intelligence.

AI’s moonshot moment

During a recent meeting with President Biden, Stanford HAI leaders urged investment and leadership to unlock AI's potential.

A blueprint for using AI in psychotherapy

A working paper proposes a three-stage process, similar to autonomous vehicle development, for responsibly integrating AI into psychotherapy.

New tool reveals language models’ political bias

A new tool finds that popular large language models have a decided bias on hot-button topics that may be out of step with popular opinion.

Can we trust generative search engines?

A generative search engine is supposed to respond to queries using content extracted from top web search hits, but there’s no easy way to know when it’s just making things up.

Why GPT detectors aren’t a solution to the AI cheating problem

At least seven algorithms promise to expose AI-written prose, but there’s one problem: They’re especially unreliable when the author is not a native English speaker.