Kapoor and Narayanan hosted a workshop late last month to draw attention to what they call a “reproducibility crisis” in science that relies on machine learning. They expected around 30 attendees but received applications from more than 1,500 people, a surprise they say suggests that problems with machine learning in science are widespread.

During the event, guest speakers described numerous examples of machine learning being misused, in fields including medicine and the social sciences. Michael Roberts, a senior research associate at the University of Cambridge, discussed problems with dozens of papers claiming to use machine learning to fight Covid-19, including cases where data was skewed because it came from a variety of different imaging machines. Jessica Hullman, an associate professor at Northwestern University, compared problems in studies using machine learning to the phenomenon of high-profile results in psychology that proved impossible to replicate. In both cases, Hullman says, researchers are prone to using too little data and to misreading the statistical significance of their results.
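
The imaging pitfall Roberts described can be made concrete in a few lines of code. The sketch below is a hypothetical illustration, not taken from any of the Covid-19 papers: when each scanner mostly sees one class of patient, a classifier can score well on a random train/test split simply by learning the scanner’s signature, and the illusion collapses once the model is evaluated on a scanner it never saw.

```python
# A minimal, synthetic sketch of scanner confounding (assumed setup, not real data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical setup: scanner A images mostly sick patients, scanner B mostly healthy.
scanner = rng.integers(0, 2, n)  # 0 = scanner A, 1 = scanner B
label = (rng.random(n) < np.where(scanner == 0, 0.9, 0.1)).astype(int)
# Features carry a scanner-specific intensity offset but no real disease signal.
X = rng.normal(size=(n, 10)) + scanner[:, None] * 1.5

# Random split: the model exploits the scanner offset and looks impressive (~0.9).
X_tr, X_te, y_tr, y_te = train_test_split(X, label, random_state=0)
print("random-split accuracy:", LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te))

# Split by scanner instead: trained on A, tested on B, accuracy collapses,
# because the model mostly predicts scanner A's majority class.
a, b = scanner == 0, scanner == 1
model = LogisticRegression().fit(X[a], label[a])
print("held-out-scanner accuracy:", model.score(X[b], label[b]))
```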

Momin Malik, a data scientist at the Mayo Clinic, was invited to speak about his own work tracking problematic uses of machine learning in science. In addition to common mistakes in implementing the technique, he says, researchers sometimes apply machine learning when it’s the wrong tool for the job.
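
One textbook implementation mistake of the sort Malik tracks is data leakage during preprocessing. The following sketch is a hypothetical illustration (it assumes scikit-learn and uses entirely synthetic noise data; it is not drawn from Malik’s work): selecting features on the full dataset before cross-validation makes pure noise look predictive, while doing the selection inside each training fold gives the honest, near-chance answer.

```python
# Leaky vs. proper feature selection on pure noise (synthetic example).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5000))  # pure noise: no feature actually predicts y
y = rng.integers(0, 2, 100)

# Wrong: feature selection sees the test folds, so "predictive" noise survives.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_sel, y, cv=5).mean()

# Right: selection happens inside each training fold only.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
proper = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky estimate:  {leaky:.2f}")   # well above 0.5 despite pure noise
print(f"proper estimate: {proper:.2f}")  # near chance, as it should be
```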

Malik points to a prominent example of machine learning producing misleading results: Google Flu Trends, a tool Google launched in 2008 that aimed to use machine learning to identify flu outbreaks earlier from the logs of search queries typed by web users. Google gained positive publicity for the project, but the tool failed spectacularly to predict the course of the 2013 flu season. An independent study would later conclude that the model had latched onto seasonal search terms that had nothing to do with flu prevalence. “You can’t just throw everything into a big machine learning model and see what comes out,” says Malik.
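
That failure mode is easy to reproduce with synthetic data. The sketch below is a hypothetical reconstruction of the mechanism, not Google’s actual model: a regression fit on a seasonal but flu-unrelated search term tracks history almost perfectly, then misses badly the year the outbreak pattern shifts.

```python
# Spurious seasonal correlation: fits history, fails when the season shifts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
weeks = np.arange(52 * 5)                 # five years of weekly data
winter = np.cos(2 * np.pi * weeks / 52)   # shared seasonal cycle
flu = 100 + 80 * winter + rng.normal(0, 5, weeks.size)
# A search term that is seasonal but causally unrelated to flu.
ski_searches = 50 + 40 * winter + rng.normal(0, 5, weeks.size)

model = LinearRegression().fit(ski_searches[:, None], flu)
print("in-sample R^2:", model.score(ski_searches[:, None], flu))  # very high

# An off-season outbreak: flu peaks half a year later, ski searches do not.
flu_shifted = 100 + 80 * np.cos(2 * np.pi * (weeks / 52 + 0.5)) + rng.normal(0, 5, weeks.size)
pred = model.predict(ski_searches[:, None])
print("shifted-season mean error:", np.mean(np.abs(pred - flu_shifted)))  # huge misses
```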

Some workshop participants say it may not be possible for all scientists to become masters of machine learning, especially given the complexity of some of the problems highlighted. Amy Winecoff, a data scientist at the Center for Information Technology Policy at Princeton, says that while it is important for scientists to learn good software engineering principles, master statistical techniques, and invest time in maintaining datasets, this shouldn’t come at the expense of domain knowledge. “We don’t want, for example, schizophrenia researchers who know a lot about software engineering,” she says, “but little about the causes of the disorder.” Winecoff suggests that closer collaboration between scientists and computer scientists could help strike the right balance.

While the misuse of machine learning in science is an issue in itself, it can also be seen as an indicator that similar issues are likely to be common in corporate or government AI projects that are less open to external scrutiny.

Malik says he’s most concerned about the prospect of misapplied AI algorithms causing real-world harm, such as wrongly denying someone health care or unfairly advising against parole. “The general lesson is that it’s not appropriate to approach everything with machine learning,” he says. “Despite the rhetoric, the hype, the successes and the hopes, it is a limited approach.”

Kapoor of Princeton says it’s vital that scientific communities start thinking about the issue. “Machine learning-based science is still in its infancy,” he says. “But this is urgent: it could have very harmful long-term consequences.”
