A novel jury learning system lets content moderators explicitly choose which people to listen to when training machine learning systems to recognize toxic speech.
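To make the mechanism concrete, here is a minimal sketch of jury-style aggregation, assuming a hypothetical per-annotator scoring model; `predict_annotator_score`, `jury_verdict`, and the annotator IDs are illustrative stand-ins, not the system's actual implementation.

```python
# Minimal sketch of jury-learning-style aggregation (hypothetical names,
# not the published implementation). A model predicts each individual
# annotator's judgment; a moderator composes a "jury," and the system
# aggregates only those jurors' predicted verdicts.
from statistics import median

def predict_annotator_score(annotator_id: str, text: str) -> float:
    """Stand-in for a learned per-annotator model returning the toxicity
    score in [0, 1] that this annotator would assign to `text`."""
    base = 0.9 if "hate" in text.lower() else 0.2   # toy heuristic only
    return min(1.0, base + (hash(annotator_id) % 3) / 100)

def jury_verdict(jury: list[str], text: str, threshold: float = 0.5) -> bool:
    """Aggregate the predicted judgments of the chosen jurors."""
    scores = [predict_annotator_score(a, text) for a in jury]
    return median(scores) >= threshold

# The moderator explicitly chooses whose judgments the classifier reflects.
jury = ["annotator_03", "annotator_17", "annotator_42"]
print(jury_verdict(jury, "I hate this group"))  # True
```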
By comparing the most energy-efficient running speeds of recreational runners in a lab to the preferred, real-world speeds measured by wearable trackers, Stanford scientists found that runners prefer a low-effort pace – even for short distances.
Watch a discussion of the promise and pitfalls of using AI to bring life-saving drugs to market, including a look at justice and equity in drug research and access.
Autonomous drones that collect data guided by scientific machine learning models could play a pivotal role in reducing uncertainty in projections of sea-level rise.
Using artificial intelligence to analyze vast amounts of data in atomic-scale images, Stanford researchers answered long-standing questions about an emerging type of rechargeable battery that could compete with lithium-ion chemistry.
A study examined the gap between the availability and accessibility of AI-enabled communication tools, such as predictive texting, and found that internet access, age, and users' speech characteristics were barriers to use.
In this episode of The Future of Everything, Chelsea Finn, an expert on AI and robotics, says the latest trend in her field is teaching AI to look inward to improve itself.
Working at the intersection of hardware and software engineering, researchers are developing new techniques for improving 3D displays for virtual and augmented reality technologies.
Incorporating sensing and wayfinding approaches from robotics and self-driving vehicles, a newly developed augmented cane could reshape life for people with blindness or sight impairment.
To study how having virtual bodies affects the evolution of AI, researchers created a computer-simulated playground where “unimals” learn and are subjected to mutations and natural selection.
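A toy mutation-and-selection loop conveys the flavor of that setup; the genome encoding and fitness function below are made up for illustration and are not the researchers' simulation.

```python
# Toy evolutionary loop in the spirit of the "unimal" playground (a sketch,
# not the actual codebase). Each agent is reduced to a parameter vector;
# `fitness` stands in for performance the agent would learn in simulation.
import random

def fitness(genome: list[float]) -> float:
    # Hypothetical objective: genomes nearer a target body plan score higher.
    return -sum((g - 0.7) ** 2 for g in genome)

def mutate(genome: list[float], rate: float = 0.1) -> list[float]:
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in range(8)] for _ in range(32)]
for generation in range(50):
    # Natural selection: keep the top quarter, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 4]
    population = survivors + [
        mutate(random.choice(survivors))
        for _ in range(len(population) - len(survivors))
    ]
print(round(fitness(population[0]), 4))  # best fitness approaches 0
```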
A deep learning approach to classifying buildings with wildfire damage may help responders focus their recovery efforts and offer more immediate information to displaced residents.
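In outline, such a classifier can be as simple as a small convolutional network over aerial image patches; the PyTorch sketch below is an assumed toy architecture, not the team's actual model.

```python
# Bare-bones CNN that labels an aerial image patch "intact" vs "damaged"
# (illustrative architecture only; requires `pip install torch`).
import torch
import torch.nn as nn

class DamageClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # logits: [intact, damaged]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = DamageClassifier()
patch = torch.randn(1, 3, 64, 64)        # one RGB patch from post-fire imagery
probs = model(patch).softmax(dim=1)      # two class probabilities summing to 1
print(probs)
```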
Stanford researchers develop machine learning methods that accurately predict the 3D shapes of drug targets and other important biological molecules, even when only limited data is available.
A new study reveals medical AI tools aren’t being documented with rigor or transparency, leaving users blind to potential errors such as flawed training data and calibration drift.
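The kind of structured reporting the study finds lacking can be pictured as a minimal "model card"; the fields and example values below are hypothetical, not a standard the paper prescribes.

```python
# Sketch of minimal structured documentation for a clinical model
# (field names and values are illustrative assumptions).
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str           # provenance, to surface flawed training data
    evaluation_population: str
    last_calibration_check: str  # guards against silent calibration drift
    known_failure_modes: list[str] = field(default_factory=list)

card = ModelCard(
    name="sepsis-risk-v2",
    intended_use="inpatient early warning, adults only",
    training_data="2015-2019 EHR records, single health system",
    evaluation_population="held-out 2020 cohort, same system",
    last_calibration_check="2021-06",
    known_failure_modes=["under-triage of pediatric patients"],
)
print(card.last_calibration_check)
```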
AI offers new tools for calculating credit risk. But it can be tripped up by noisy data, leading to disadvantages for low-income and minority borrowers.
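A quick simulation illustrates the mechanism with synthetic data and an off-the-shelf logistic model (not the study's data or methods): when the same underlying creditworthiness signal is measured noisily, as it often is for borrowers with thin credit files, risk scores become markedly less accurate.

```python
# Synthetic demo: noisier features degrade a credit-risk model's ranking
# accuracy (requires `pip install numpy scikit-learn`).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
signal = rng.normal(size=(n, 4))                 # true creditworthiness factors
default = (signal.sum(axis=1) + rng.normal(size=n) < -1).astype(int)

clean = signal                                            # well-measured files
noisy = signal + rng.normal(scale=2.0, size=signal.shape) # thin/noisy files

for name, X in [("clean features", clean), ("noisy features", noisy)]:
    model = LogisticRegression().fit(X[:4000], default[:4000])
    auc = roc_auc_score(default[4000:], model.predict_proba(X[4000:])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")  # noisier inputs -> weaker risk ranking
```

If the noise concentrates in one group's records, the accuracy loss, and any lending decisions built on it, concentrates there too.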
In a Q&A, Michal Kosinski, associate professor of organizational behavior, talks about exposing the dangers of new technologies and the controversies that come with it.
A new machine learning approach helps scientists understand why extreme precipitation days in the Midwest are becoming more frequent. It could also improve predictions of how these and other extreme weather events will change in the future.
Experts from psychology, neuroscience and AI are using state-of-the-art computational tools to disentangle the relationship between perception and memory within the human brain.
Stanford professors develop and use an AI teaching tool that can provide feedback on students’ homework assignments in university-level coding courses, a previously laborious and time-consuming task.
Large language models are showing great promise, from writing code to composing convincing essays, and could begin to power more of our everyday tools. That could lead to serious consequences if their biases aren't remedied, Stanford researchers say.
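One common way to surface such bias is a masked-template probe. The sketch below uses the Hugging Face `transformers` fill-mask pipeline as an example harness; it is a standard demonstration, not the researchers' evaluation.

```python
# Probe occupation-gender association in a masked language model
# (requires `pip install transformers`; downloads bert-base-uncased).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for job in ["nurse", "engineer"]:
    preds = unmasker(f"The {job} said that [MASK] was tired.",
                     targets=["he", "she"])
    probs = {p["token_str"]: round(p["score"], 3) for p in preds}
    print(job, probs)  # skewed he/she odds suggest a learned stereotype
```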
A team of human-computer interaction and AI researchers at Stanford sheds new light on why automated toxic-speech detectors can score highly on technical benchmarks yet provoke widespread dissatisfaction with their decisions.