Artificial intelligence

News articles classified as Artificial intelligence

Runners prefer the same pace, regardless of distance

By comparing the most energy-efficient running speeds of recreational runners in a lab to the preferred, real-world speeds measured by wearable trackers, Stanford scientists found that runners prefer a low-effort pace – even for short distances.

Stanford Institute for Human-Centered Artificial Intelligence —

How fast will Antarctica’s ice sheet melt?

Autonomous drones that collect data guided by scientific machine learning models could play a pivotal role in reducing the uncertainty in projections of sea-level rise.

AI deciphers atomic-scale images for better batteries

Using artificial intelligence to analyze vast amounts of data in atomic-scale images, Stanford researchers answered long-standing questions about an emerging type of rechargeable battery that could rival lithium-ion chemistry.

Stanford HAI —

Do popular AI communication tools favor the privileged?

A study examined the gap between the availability and accessibility of AI-enabled communication tools such as predictive texting, and found that internet access, age and user speech characteristics were barriers to use.

Stanford Institute for Human-Centered Artificial Intelligence —

Can’t unsubscribe? Blame dark patterns

Jennifer King, a privacy and data policy fellow at HAI, explains the importance of tracking and regulating manipulative online tactics.

Stanford Engineering —

How to make artificial intelligence more meta

In this episode of The Future of Everything, Chelsea Finn, an expert on AI and robotics, says the latest trend in her field is teaching AI to look inward and improve itself.

Stanford Institute for Human-Centered Artificial Intelligence —

Closing language gaps to improve COVID-19 tracing

Scholars employ a machine learning algorithm to predict people’s language needs, helping contact tracers resolve cases faster.

Stanford Institute for Human-Centered Artificial Intelligence —

Stanford researchers build $400 self-navigating smart cane

Incorporating sensing and wayfinding approaches from robotics and self-driving vehicles, the cane could reshape life for people who are blind or visually impaired.

AI system identifies buildings damaged by wildfire

A deep learning approach to classifying buildings with wildfire damage may help responders focus their recovery efforts and offer more immediate information to displaced residents.

AI algorithm solves structural biology challenges

Stanford researchers develop machine learning methods that accurately predict the 3D shapes of drug targets and other important biological molecules, even when only limited data is available.

Stanford Institute for Human-Centered Artificial Intelligence —

Hospital AI tools aren’t well documented

A new study reveals that medical AI tools aren’t being documented with rigor or transparency, leaving users blind to potential problems such as flawed training data and calibration drift.

Stanford HAI —

How flawed data aggravates inequality in credit

AI offers new tools for calculating credit risk. But it can be tripped up by noisy data, leading to disadvantages for low-income and minority borrowers.

Stanford Graduate School of Business —

Facing the unsettling power of AI to analyze our photos

In a Q&A, Michal Kosinski, associate professor of organizational behavior, talks about exposing the dangers of new technologies and the controversies that come with it.

Understanding extreme weather

A new machine learning approach helps scientists understand why extreme precipitation days in the Midwest are becoming more frequent. It could also improve predictions of how these and other extreme weather events will change in the future.

AI tool streamlines feedback on coding homework

Stanford professors develop and deploy an AI teaching tool that provides feedback on students’ homework in university-level coding courses, a task that was previously laborious and time-consuming.

Stanford Institute for Human-Centered Artificial Intelligence —

Rooting out anti-Muslim bias in popular language model GPT-3

Large language models are showing great promise, from writing code to drafting convincing essays, and could begin to power more of our everyday tools. That could lead to serious consequences if their bias isn’t remedied, Stanford researchers say.

Stanford HAI —

Why AI struggles to recognize toxic speech on social media

A team of human-computer interaction and AI researchers at Stanford sheds new light on why automated toxic speech detectors can achieve high accuracy on technical tests yet still leave many people dissatisfied with their decisions.