
Computer scientists this year learned how to pass on perfect secrets, why transformers seem so good at everything, and how to improve decades-old algorithms (with a little help from AI).

Myriam Wares for Quanta Magazine

Introduction

As computer scientists tackle a wider range of problems, their work becomes increasingly interdisciplinary. This year, many of the most significant results in computer science also involved other scientists and mathematicians. Perhaps most practical were the cryptographic questions underlying Internet security, which tend to be complicated mathematical problems. One such problem – the product of two elliptic curves and their relationship to an abelian surface – brought down a promising new cryptographic scheme that was thought to be strong enough to withstand an attack by a quantum computer. And a different set of mathematical relationships, in the form of one-way functions, will tell cryptographers whether truly secure codes are possible.

Computer science, and quantum computing in particular, also overlaps heavily with physics. In one of the biggest developments in theoretical computer science this year, researchers posted a proof of the NLTS conjecture, which implies (among other things) that the ghostly connection between particles known as quantum entanglement is not as delicate as physicists once imagined. This has implications not only for understanding our physical world, but also for the many cryptographic possibilities that entanglement enables.

And artificial intelligence has always flirted with biology; indeed, the field takes inspiration from the human brain as perhaps the ultimate computer. While understanding how the brain works and creating brain-like AI has long been a dream of computer scientists and neuroscientists, a new type of artificial neural network known as a transformer appears to process information in a brain-like way. As we learn more about how both work, each tells us something about the other. Perhaps this is why transformers excel at problems as varied as language processing and image classification. AI has even gotten better at helping us make better AI, with new “hypernetworks” helping researchers train neural networks faster and at lower cost. So the field not only helps other scientists with their work, but also helps its own researchers achieve their goals.

Kristina Armitage for Quanta Magazine


Entangled Answers

When it comes to quantum entanglement, the property that intimately links even distant particles, physicists and computer scientists were at an impasse. Everyone agreed that fully describing a maximally entangled system would be impossibly hard. But physicists thought it might be easier to describe systems that were merely close to maximally entangled. Computer scientists disagreed, saying those would be just as impossible to compute, a belief formalized in the “no low-energy trivial state” (NLTS) conjecture. In June, a group of computer scientists posted a proof of it. Physicists were surprised, since it implies that entanglement is not necessarily as fragile as they thought, and computer scientists were happy to be one step closer to proving a central open question known as the quantum PCP conjecture, which requires NLTS to be true.

This news follows results from late last year showing that it is possible to use quantum entanglement to achieve perfect secrecy in encrypted communications. And in October researchers successfully entangled three particles over great distances, bolstering the possibilities for quantum encryption.

Avalon for Quanta Magazine


Transforming How AI Understands

Over the past five years, transformers have been revolutionizing how AI processes information. Originally developed for understanding and generating language, a transformer processes every element of its input data simultaneously rather than one piece at a time, which gives it greater speed and accuracy compared with other language networks. This also makes it unusually versatile, and other AI researchers are putting it to work in their own fields. They have discovered that the same principles can power new tools for classifying images and for processing multiple kinds of data at once. These benefits come at the cost of more training data than non-transformer models require, however. Researchers studying how transformers work learned in March that part of their power comes from their ability to attach meaning to words, rather than just memorizing patterns. Transformers are so adaptable, in fact, that neuroscientists have begun modeling human brain functions with transformer-based networks, suggesting a fundamental similarity between artificial and human intelligence.
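The mechanism behind this all-at-once processing is attention: every token in a sequence is compared against every other token in a single matrix operation. Below is a minimal sketch of scaled dot-product self-attention in plain NumPy; the dimensions and random weight matrices are invented for illustration, and it omits the multi-head, masking, and training machinery of a real transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sequence at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # every token attends to every token
    return softmax(scores) @ V               # weighted mix of all value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Because the whole sequence is handled as one matrix product, there is no token-by-token recurrence, which is what gives transformers their parallel speed.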

Kristina Armitage for Quanta Magazine


Breaking Down Cryptography

The security of online communications rests on the difficulty of various math problems: the harder a problem is to solve, the harder a hacker must work to break the encryption built on it. And because today’s cryptographic protocols would be easy work for a quantum computer, researchers have sought new problems that could withstand one. But in July, one of the most promising candidates collapsed after just an hour of computing on a laptop. “It’s a bit of a bummer,” said Christopher Peikert, a cryptographer at the University of Michigan.

The failure highlights the difficulty of finding the right problems. Researchers have shown that provably secure cryptography, a code that can never be broken, is possible only if you can prove that “one-way functions” exist: problems that are easy to compute but hard to reverse. We still don’t know whether they do (a discovery that would tell us which cryptographic universe we live in), but a pair of researchers showed that the question is equivalent to another problem called Kolmogorov complexity, which involves analyzing strings of numbers: one-way functions, and with them truly secure cryptography, are possible only if a certain version of Kolmogorov complexity is hard to compute.
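To see why “easy to compute, hard to reverse” is plausible, consider multiplication versus factoring, a classic candidate for one-way behavior. The sketch below is purely illustrative: the primes are arbitrary toy choices, the brute-force search stands in for the general difficulty of inversion, and nothing here is actually proven hard.

```python
def multiply(p, q):
    """The 'easy' direction: one multiplication."""
    return p * q

def trial_factor(n):
    """The 'hard' direction, done naively: search for a nontrivial factor.
    The work grows exponentially in the number of digits of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n was prime

n = multiply(1000003, 1000033)   # instant
print(trial_factor(n))           # 1000003, after ~a million trial divisions
```

For the toy numbers above the search still finishes quickly; for the thousands-of-bits numbers used in real cryptography, no known classical algorithm reverses the multiplication efficiently.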

Olivia Fields for Quanta Magazine


Machines Help Train Machines 

In recent years, the pattern-recognition skills of artificial neural networks have supercharged the field of AI. But before a network can get to work, researchers must first train it, fine-tuning potentially billions of parameters in a process that can take months and require huge amounts of data. Or they could get a machine to do it for them. With a new kind of “hypernetwork”, a network that processes other networks, they may soon be able to. The hypernetwork, called GHN-2, analyzes any given network and provides a set of parameter values shown in a study to be generally at least as effective as those of traditionally trained networks. Even when it didn’t provide the best possible parameters, GHN-2’s suggestions offered a starting point closer to the ideal, reducing the time and data needed to complete training.
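The core idea, one network emitting the parameters of another, can be sketched in a few lines. This toy is not GHN-2: the two-number “architecture descriptor” and the linear hypernetwork are invented purely to show the shape of the approach, in which the hypernetwork’s output becomes the target network’s weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "hypernetwork": a fixed linear map from an architecture descriptor
# (here just [input_dim, output_dim]) to a flat vector of weights.
H = rng.normal(size=(2, 6), scale=0.1)

def predict_params(descriptor):
    """Hypernetwork forward pass: descriptor in, target-net weights out."""
    return np.asarray(descriptor) @ H

def target_net(x, flat_params, in_dim=2, out_dim=3):
    """The target network, built from weights it never trained itself."""
    W = flat_params.reshape(in_dim, out_dim)
    return x @ W

params = predict_params([2, 3])      # hypernet emits the target's weights
y = target_net(np.ones(2), params)
print(y.shape)  # (3,)
```

In a real system the hypernetwork is itself trained, so that the parameters it predicts already perform well, giving the target network a head start over random initialization.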

This summer, Quanta also explored another new approach to helping machines learn. Known as embodied AI, it allows algorithms to learn from responsive three-dimensional environments rather than static images or abstract data. Whether they are agents exploring simulated worlds or real robots, these systems learn differently, and often better, than those trained with traditional approaches.


Improved Algorithms

This year, with the rise of more sophisticated neural networks, computers made further strides as research tools. One such tool proved particularly suited to the problem of multiplying matrices, the two-dimensional tables of numbers at the heart of much computation. There is a standard way to multiply them, but it becomes cumbersome as the matrices get bigger, so researchers are always looking for faster algorithms that use fewer steps. In October, DeepMind researchers announced that their neural network had discovered faster algorithms for multiplying certain matrices. But experts cautioned that the breakthrough represented the arrival of a new tool for attacking the problem, not an entirely new kind of AI that solves such problems on its own. Fittingly, a pair of researchers soon built on the new algorithms, using traditional tools and methods to improve them.
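The classic example of saving steps is Strassen’s 1969 algorithm, which multiplies two 2×2 matrices using seven multiplications instead of the standard eight, the same kind of saving DeepMind’s system searched for in larger cases. A direct transcription:

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications instead of 8 (Strassen, 1969)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(strassen_2x2(A, B))  # matches A @ B
```

Applied recursively to large matrices in 2×2 blocks, saving one multiplication per level compounds, which is why reducing the multiplication count, not the addition count, is the prize.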

Researchers also published, in March, a faster algorithm for solving the maximum flow problem, one of the oldest questions in computer science. By combining past approaches in new ways, the team created an algorithm that can determine the maximum possible flow of material through a given network far faster than anyone expected. “Honestly, I thought … algorithms that are good for this problem wouldn’t exist,” said Daniel Spielman of Yale University.
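For contrast with the new result, here is the classical Edmonds-Karp approach to maximum flow, which repeatedly pushes flow along shortest augmenting paths. The small example network is made up; this is the decades-old baseline, not the new near-linear-time algorithm.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: BFS for a shortest augmenting path, push flow, repeat.
    cap[u][v] is the capacity of edge u -> v; runs in O(V * E^2)."""
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:              # no augmenting path left: done
            return flow
        bottleneck, v = float("inf"), t  # smallest capacity on the path
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t                            # update residual capacities
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

cap = [[0, 10, 10, 0],
       [0, 0, 2, 8],
       [0, 0, 0, 10],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 18
```

The new algorithm reaches the same answer by very different means, blending combinatorial ideas like these with continuous optimization techniques.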

Sasha Maslov for Quanta Magazine


New Avenues for Sharing Information

Mark Braverman, a theoretical computer scientist at Princeton University, has spent more than a quarter of his life working on a new theory of interactive communication. His work lets researchers quantify terms like “information” and “knowledge”, not only leading to a greater theoretical understanding of interactions, but also creating new techniques that enable more efficient and accurate communication. For this achievement and others, the International Mathematical Union awarded Braverman the IMU Abacus Medal, one of the highest honors in theoretical computer science, in July.
