An AI model that uses Google Street View to spot early signs of gentrification could one day help cities target anti-displacement policies more precisely.
Stanford Institute for Human-Centered Artificial Intelligence
Amy Zegart on AI and spycraft
A profession that once hunted diligently for secrets is now picking through huge haystacks for needles of insight – precisely the kind of work at which AI excels.
You can’t regulate artificial intelligence without technical talent, says Stanford HAI’s Daniel Zhang. Tech, Ethics & Policy Fellows are helping shape the conversation.
Stanford education researchers are at the forefront of building natural language processing systems that will support teachers and improve instruction in the classroom.
“Generative agents” that draw on large language models to make breakfast, head to work, grab lunch, and ask other agents out on dates could change both gaming and social science.
This year’s cohort includes scholars from sociology, law, art, computer science, mechanical engineering, anthropology, psychology, ethics, ecology, and more.
Medical algorithms trained on adult data may be unreliable for evaluating young patients. But children’s records present complex quandaries for AI, especially around equity and consent.
A model trained on thousands of images in medical textbooks and journal articles found that dark skin tones are underrepresented in materials that teach doctors to recognize disease.
An increasing number of people are turning to AI for help in sensitive areas like financial planning and medical advice, but researchers say large language models aren’t trustworthy enough for such critical jobs.
To effectively regulate artificial intelligence, lawmakers must first understand it. A Stanford HAI workshop helped staffers think critically about this emerging technology.
A generative search engine is supposed to respond to queries using content extracted from top web search hits, but there’s no easy way to know when it’s just making things up.
Why GPT detectors aren’t a solution to the AI cheating problem
At least seven algorithms promise to expose AI-written prose, but there’s one problem: They’re especially unreliable when the author is not a native English speaker.