Stanford bioengineer Kwabena Boahen is on a quest to build computers that function like the brain, which could address the expense and environmental impact of AI’s soaring demand for computing power.
Federal Trade Commission Chair Lina Khan has a warning for the tech industry: Antitrust enforcers are watching what you do in the race to profit from artificial intelligence.
Stanford Institute for Human-Centered Artificial Intelligence
Strategies to help students feel more engaged and valued are a better way to curb cheating than taking a hard line on AI, says Stanford education scholar Denise Pope.
Stanford education researchers are at the forefront of building natural language processing systems that will support teachers and improve instruction in the classroom.
Using synchronized video from a pair of smartphones, engineers at Stanford have created an open-source motion-capture app that democratizes the once-exclusive science of human movement at 1% of the cost.
An AI-powered database could help Brazilian authorities locate labor camps in the Amazon rainforest where hundreds of thousands of people are held in conditions of modern slavery.
Medical algorithms trained on adult data may be unreliable for evaluating young patients. But children’s records present complex quandaries for AI, especially around equity and consent.
A model trained on thousands of images in medical textbooks and journal articles found that dark skin tones are underrepresented in materials that teach doctors to recognize disease.
The platform formerly known as Twitter turns out to be a surprising source of high-quality medical knowledge, says biomedical data science expert James Zou.
An increasing number of people are turning to AI for help in sensitive areas like financial planning and medical advice, but researchers say large language models aren’t yet trustworthy enough for such high-stakes tasks.
To effectively regulate artificial intelligence, lawmakers must first understand it. A Stanford HAI workshop helped staffers think critically about this emerging technology.
A generative search engine is supposed to respond to queries using content extracted from top web search hits, but there’s no easy way to know when it’s just making things up.
At least seven algorithms promise to expose AI-written prose, but there’s one problem: They’re especially unreliable when the author is not a native English speaker.