


An exciting opportunity presented by artificial intelligence (AI) is the potential to crack some of the most difficult and important problems facing the fields of science and engineering. AI and science complement each other very well, with the former looking for patterns in data and the latter dedicated to discovering the underlying principles that create those patterns.

As a result, the combination of AI and science has the potential to massively boost the productivity of scientific research and the pace of engineering innovation.

These deep technological innovations have the potential to change the world. To realize them, however, data scientists and machine learning engineers face some major challenges in ensuring that their models and infrastructure deliver the change they want to see.

Explainability

A key part of the scientific method is interpreting and explaining the work and results of an experiment. This is essential for other groups to repeat the experiment and verify the findings, and it allows non-experts and members of the public to understand the nature and potential of the results. If an experiment cannot be easily interpreted or explained, further testing of a discovery, and even disseminating and commercializing it, becomes a major problem.

When it comes to AI models based on neural networks, we should treat inferences as experiments too. Although a model technically generates an inference from patterns it has observed, there is often a degree of randomness and variance in the output. Understanding a model’s inferences therefore requires the ability to understand its intermediate steps and logic.

This is a problem for many AI models that rely on neural networks, because many currently act as “black boxes”: the steps between data input and data output are not labeled, and there is no way to explain the “why” behind a particular inference. As you can imagine, this is a major problem when explaining the implications of an AI model.
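One common way to probe such a black box from the outside is to measure how sensitive its output is to each input feature, a crude form of local attribution. A minimal sketch, assuming the model is exposed only through a hypothetical `predict()` function (the model here is a stand-in for illustration, not one from the article):

```python
import numpy as np

# Stand-in for a "black box": we may only call predict(), never inspect internals.
# (Hypothetical model -- a fixed nonlinear function, purely for illustration.)
def predict(x):
    return 1.0 / (1.0 + np.exp(-(3.0 * x[0] - 0.5 * x[1] + 0.1 * x[2])))

def sensitivity(predict, x, eps=1e-4):
    """Crude local attribution: how much does each input feature move the output?"""
    base = predict(x)
    scores = np.zeros_like(x)
    for i in range(len(x)):
        bumped = x.copy()
        bumped[i] += eps                        # nudge one feature at a time
        scores[i] = (predict(bumped) - base) / eps
    return scores

x = np.array([0.2, -1.0, 0.5])
scores = sensitivity(predict, x)
# The largest-magnitude score flags the feature driving this particular inference.
print(scores)
```

This recovers a local “why” for one inference, but it is no substitute for labeled, interpretable intermediate steps: it says which inputs mattered, not what the model computed along the way.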

This limits the ability of the data scientists who develop models, and of the DevOps engineers responsible for deploying them across compute and storage infrastructure, to understand what a model is doing. That, in turn, creates a barrier to the scientific community verifying and reviewing a finding.

But there is also a problem when it comes to attempts to scale, commercialize or apply the fruits of research beyond the laboratory. Researchers who want to win over regulators or customers will have a hard time getting their idea accepted if they cannot clearly explain and justify their findings in language a non-specialist audience can understand. And then there is the problem of ensuring that an innovation is safe for the public to use, especially when it comes to biological or medical innovations.


Reproducibility

Another fundamental principle of the scientific method is the ability to reproduce the findings of an experiment. Reproducing an experiment allows scientists to verify that a result is not a falsification or a fluke, and that the purported explanation of a phenomenon is accurate. This ensures that the wider academic community and the public can trust the accuracy of an experiment.

However, AI has a major problem in this regard. Small adjustments to the code and structure of a model, small variations in the training data it is fed, or differences in the infrastructure it is deployed on can cause it to produce significantly different results. This can make it difficult to trust the results of a model.
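One practical mitigation is to pin every source of randomness behind an explicit seed, so a training run can be replayed bit-for-bit. A minimal sketch, using a toy SGD linear fit rather than any particular framework’s API:

```python
import numpy as np

def train(seed, steps=200, lr=0.02):
    """Toy SGD linear fit; one seed controls data, init and sample order."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(64, 2))
    y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=64)
    w = rng.normal(size=2)       # random init: one source of run-to-run variance
    for _ in range(steps):
        i = rng.integers(0, 64)  # random sample order: another source
        w -= lr * 2 * (X[i] @ w - y[i]) * X[i]
    return w

# Same seed -> bit-identical weights; different seed -> a (slightly) different model.
assert np.array_equal(train(0), train(0))
assert not np.array_equal(train(0), train(1))
```

Seeding only covers randomness in the code itself; differences in hardware, library versions or parallel execution order can still break bit-for-bit reproducibility, which is why infrastructure must be pinned alongside the seed.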

But the issue of reproducibility can also make scaling a model very difficult. If a model is inflexible in its code, infrastructure, or inputs, then it is very difficult to extend it outside the research environment in which it was created. This is a big problem in bringing innovations from the laboratory to industry and society at large.

Escaping the theoretical grip

The next issue is less existential: the embryonic nature of the field. Papers on harnessing AI in science and engineering are published constantly, but many of them remain highly theoretical and not overly concerned with translating lab developments into practical, real-world use cases.

This is an inevitable and important phase for most new technologies, but it is indicative of the current state of AI in science and engineering. AI is on its way to enabling incredible discoveries, yet most researchers still treat it as a tool for the laboratory rather than a source of transformative innovations for use beyond researchers’ desks.

This is a transitory issue, but a shift in mindset from theoretical to operational and implementation concerns will be key to realizing AI’s potential in this area, and to addressing the big challenges of explainability and reproducibility. Ultimately, AI promises to help us make great strides in science and engineering, provided we take seriously the challenge of scaling beyond the lab.

Rick Hao is the Senior Deep Tech Partner at Speedinvest.


