
Exactly two weeks after Russia invaded Ukraine in February, Alexander Karp, the CEO of data analytics firm Palantir, gave his pitch to European executives. With war on the doorstep, Europeans should modernize their arsenals with the help of Silicon Valley, he argued in an open letter.

For Europe to remain “strong enough to defeat the threat of foreign occupation,” Karp wrote, countries must “reinforce the relationship between technology and the state, between disruptive companies trying to break the grip of entrenched contractors and the government departments with funding.”

Militaries responded to the call. On June 30, NATO announced that it will set up a $1 billion innovation fund to invest in start-ups and early-stage venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation.

Since the start of the war, the UK has launched a new AI strategy specifically for defense, and Germany has earmarked just under half a billion dollars for research and artificial intelligence as part of a $100 billion cash injection into its military.

“War is a catalyst for change,” says Kenneth Payne, who leads defense studies research at King’s College London and is the author of I, Warbot: The Dawn of Artificially Intelligent Conflict.

The war in Ukraine has added even more urgency to the drive to push more AI tools onto the battlefield. Those with the most to gain are startups such as Palantir, which hope to cash in as militaries scramble to update their arsenals with the latest technologies. But longstanding ethical concerns over the use of AI in warfare have become more urgent as the technology grows more advanced, while the prospect of restrictions and regulations governing its use looks as remote as ever.

The relationship between technology and the military hasn’t always been so friendly. In 2018, following protests and employee outrage, Google withdrew from the Pentagon’s Project Maven, an attempt to build image recognition systems to improve drone strikes. The episode sparked a heated debate about human rights and the morality of developing AI for autonomous weapons.

It also prompted top AI researchers, such as Turing Award winner Yoshua Bengio and DeepMind founders Demis Hassabis, Shane Legg, and Mustafa Suleyman, to pledge not to work on lethal AI.

But four years later, Silicon Valley is closer to the world’s militaries than ever. And it’s not just big companies, either; startups are finally getting a look in, says Yll Bajraktari, previously executive director of the US National Security Commission on AI (NSCAI) and now at the Special Competitive Studies Project, a group that lobbies for greater adoption of AI across the US.

Why AI

Companies that sell military AI make expansive claims about what their technology can do. They say it can help with everything from the mundane to the lethal, from screening résumés to processing data from satellites to recognizing patterns in data that help soldiers make quicker decisions on the battlefield. Image recognition software can help identify targets. Autonomous drones can be used for surveillance or attacks on land, in the air, or at sea, or to help soldiers deliver supplies more safely than is possible by land.

These technologies are still in their infancy on the battlefield, and militaries are going through a period of experimentation, Payne says, sometimes without much success. There are countless examples of AI companies’ tendency to make grand promises about technologies that turn out not to work as advertised, and combat zones are perhaps among the most technically challenging places to deploy AI, given the scarcity of relevant training data. This could cause autonomous systems to fail in “complex and unpredictable ways,” argued Arthur Holland Michel, an expert on drones and other surveillance technologies, in a paper for the United Nations Institute for Disarmament Research.

Nevertheless, many militaries are pushing forward. In a vaguely worded press release in 2021, the British Army proudly announced that it had used AI in a military operation for the first time, to provide information about the surrounding environment and terrain. The US is working with startups to develop autonomous military vehicles. In the future, swarms of hundreds or even thousands of autonomous drones being developed by the US and UK militaries could prove to be powerful and deadly weapons.

Many experts are concerned. Meredith Whittaker, senior adviser on AI at the Federal Trade Commission and faculty director of the AI Now Institute, says this push is more about enriching tech companies than improving military operations.

In an article for Prospect magazine co-written with Lucy Suchman, a sociology professor at Lancaster University, she argued that AI boosters are stoking Cold War rhetoric and trying to create a narrative that positions Big Tech as “critical national infrastructure,” too big and too important to break up or regulate. They warn that military adoption of AI is being presented as an inevitability rather than what it really is: an active choice that involves ethical complexities and trade-offs.


AI war chests

With the Maven controversy relegated to the past, voices calling for more AI in defense have grown louder in recent years.

One of the loudest was Google’s former CEO Eric Schmidt, who chaired the NSCAI and called for the US to take a more aggressive approach to adopting military AI.

In a report last year outlining the steps the United States should take to be at the state of the art in AI by 2025, the NSCAI called on the US military to invest $8 billion a year in these technologies or risk falling behind China.

The Chinese military likely spends at least $1.6 billion a year on AI, according to a report by the Georgetown Center for Security and Emerging Technologies, and the US is already making a significant push to reach parity, says Lauren Kahn, a research fellow at the Council on Foreign Relations. The US Department of Defense requested $874 million for artificial intelligence in 2022, although that figure does not reflect the department’s overall investment in AI, according to a March 2022 report.

The US military is not the only one convinced of the need. European countries, which tend to be more cautious about adopting new technologies, are also spending more money on AI, says Heiko Borchert, co-director of the Defense AI Observatory at Helmut Schmidt University in Hamburg.

The French and British have identified AI as a key technology for defense, and the European Commission, the EU’s executive arm, has allocated $1 billion to develop new defense technologies.


Good hoops, bad hoops

Building demand for AI is one thing. Getting militaries to adopt it is quite another.

Many countries are pushing the AI narrative, but they’re struggling to move from concept to deployment, says Arnaud Guérin, CEO of Preligens, a French startup that sells AI surveillance software. That’s partly because the defense industry in most countries is still dominated by a cluster of large contractors that tend to have more expertise in military hardware than in AI software, he says.

That’s also because cumbersome military vetting processes move slowly compared with the breakneck speed we’re used to in AI development: military contracts can span decades, but in the fast-paced startup cycle, companies have only about a year to get off the ground.

Startups and venture capitalists have expressed frustration at the slow pace of the process. The risk, argues Katherine Boyle, a general partner at venture capital firm Andreessen Horowitz, is that talented engineers will leave in frustration for jobs at Facebook and Google, and startups will go bankrupt waiting for defense contracts.

“Some of these hoops are absolutely critical, especially in this sector, where safety concerns are very real,” says Mark Warner, who founded FacultyAI, a data analysis company that works with the UK military. “But others aren’t … and in some ways have enshrined the position of incumbents.”

AI companies with military ambitions “need to stay in business for a long time,” says Ngor Luong, a research analyst who has studied AI investment trends at the Georgetown Center for Security and Emerging Technologies.

Militaries are in a bind, says Kahn: go too fast and risk deploying dangerous, broken systems; go too slow and miss out on technological advances. The US wants to go faster, and the Department of Defense has enlisted the help of Craig Martell, the former AI chief at ride-hailing company Lyft.

In June 2022, Martell took over as head of the Pentagon’s new Chief Digital Artificial Intelligence Office, tasked with coordinating the US military’s AI efforts. Martell’s mission, he told Bloomberg, is to change the department’s culture and encourage the military’s use of AI despite “bureaucratic inertia.”

He may be pushing at an open door, as AI companies are already starting to land lucrative military contracts. In February, Anduril, a five-year-old startup that develops autonomous defense systems such as advanced underwater drones, won a $1 billion defense contract with the US. In January, ScaleAI, a startup that provides data-labeling services for AI, won a $250 million contract with the US Department of Defense.


Beware the hype

Despite the steady advance of AI onto the battlefield, the ethical concerns that sparked the protests surrounding Project Maven have not gone away.

Some effort has been made to address these concerns. Aware that there is a trust issue, the US Department of Defense has established “Responsible Artificial Intelligence” guidelines for AI developers, and it has its own ethical guidelines for AI use. NATO has an AI strategy that sets voluntary ethical guidelines for its member states.

All of these policies urge militaries to use AI in a lawful, responsible, reliable, and accountable manner, and seek to mitigate biases embedded in the algorithms.

One of their key concepts is that humans must always remain in control of AI systems. But as technology advances, that won’t really be possible, Payne says.

“The whole point of an autonomous [system] is to enable it to make decisions faster and more accurately than a human could, and at a scale that a human couldn’t,” he says. “You’re effectively handicapping yourself if you say, ‘No, we’re going to have a lawyer vet every single decision.’”

But critics say stricter rules are needed. There is a global campaign called Stop Killer Robots that aims to ban deadly autonomous weapons like drone swarms. Activists, high-profile officials like UN chief António Guterres, and governments like New Zealand’s argue that autonomous weapons are deeply unethical because they could put machines in control of life-and-death decisions and disproportionately harm marginalized communities through algorithmic bias.

For example, swarms of thousands of autonomous drones could become weapons of mass destruction. Restricting these technologies will be an uphill battle, as the idea of a global ban faces opposition from big military spenders such as the US, France, and the UK.

Ultimately, the new era of military AI raises a number of difficult ethical questions to which we do not yet have answers.

One of those questions is how automated the armed forces should be in the first place, Payne says. On the one hand, AI systems could reduce casualties by making war more targeted, but on the other hand, you “effectively create a robotic mercenary force fighting on your behalf,” he says. “It distances your society from the consequences of violence.”

