AI Watch

Source Link
Excerpt:

Extreme cosmic events, such as colliding black holes or exploding stars, cause ripples in spacetime known as gravitational waves. Their discovery opened a new window into the universe, but observing them requires ultra-precise detectors, and designing those detectors remains a major scientific challenge for humans.

Researchers at the Max Planck Institute for the Science of Light (MPL) have been working on how an artificial intelligence system could explore an unimaginably vast space of possible designs to find entirely new solutions. The results were recently published in the journal Physical Review X.

More than a century ago, Einstein theoretically predicted gravitational waves. They were first directly detected only in 2015 (announced in early 2016), because developing the necessary detectors was extremely complex. Dr. Mario Krenn, head of the “Artificial Scientist Lab” research group at MPL, working with the team of LIGO (Laser Interferometer Gravitational-Wave Observatory), which successfully built those detectors, has designed an AI-based algorithm called “Urania” to design novel interferometric gravitational-wave detectors. Interferometry is a measurement method that exploits the interference of waves, i.e. their superposition when they meet. Detector design requires optimizing both the layout and its parameters. The scientists converted this challenge into a continuous optimization problem and solved it using methods inspired by modern machine learning. They found many new experimental designs that outperform the best known next-generation detectors. These results have the potential to improve the range of detectable signals by more than an order of magnitude.
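The core idea described above — recasting a design search as a continuous optimization problem and solving it with gradient-based methods familiar from machine learning — can be sketched in a few lines. This is a toy illustration only: the objective function, parameter count, and all names below are invented for the example and are not Urania's actual formulation, which the paper in Physical Review X describes in full.

```python
import numpy as np

# Toy sketch of continuous optimization via gradient descent.
# "sensitivity_loss" is a hypothetical stand-in for a detector
# figure of merit; lower is better.

def sensitivity_loss(params):
    target = np.array([1.0, -2.0, 0.5])  # invented optimum
    return np.sum((params - target) ** 2)

def numerical_grad(f, x, eps=1e-6):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

params = np.zeros(3)        # initial continuous design parameters
for step in range(200):     # plain gradient descent
    params -= 0.1 * numerical_grad(sensitivity_loss, params)

print(np.round(params, 3))  # parameters converge toward the target design
```

Because every design knob is a continuous number, gradients point toward better designs automatically — the advantage over enumerating discrete layouts by hand.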

Source Link
Excerpt:

Eased restrictions around ChatGPT image generation can make it easy to create political deepfakes, according to a report from the CBC (Canadian Broadcasting Corporation).

The CBC discovered that not only was it easy to work around ChatGPT’s policies against depicting public figures, the chatbot even recommended ways to jailbreak its own image-generation rules. Mashable was able to recreate this approach by uploading images of Elon Musk and convicted sex offender Jeffrey Epstein, then describing them as fictional characters in various situations (“at a dark smoky club,” “on a beach drinking piña coladas”).

Political deepfakes are nothing new. But the widespread availability of generative AI models that can create images, video, audio, and text replicating real people has real consequences. That commercially marketed tools like ChatGPT allow the potential spread of political disinformation raises questions about OpenAI’s responsibility in the space. That duty of safety could become compromised as AI companies compete for user adoption.

Source Link
Excerpt:

When Anthropic CEO Dario Amodei declared that AI would write 90% of code within six months, the coding world braced for mass extinction. But inside Salesforce, a different reality has already taken shape.

“About 20% of all Apex code written in the last 30 days came from Agentforce,” Jayesh Govindarajan, Senior Vice President of Salesforce AI, told me during a recent interview. His team tracks not just code generated, but code actually deployed into production. The numbers reveal an acceleration that’s impossible to ignore: 35,000 active monthly users, 10 million lines of accepted code, and internal tools saving 30,000 developer hours every month.

Yet Salesforce’s developers aren’t disappearing. They’re evolving.

U.S. appeals court rules AI-generated works ineligible for copyright protection – Capture
Source Link
Excerpt:

The case revolved around Stephen Thaler, a computer scientist who sought copyright protection for an artwork titled A Recent Entrance to Paradise, which was generated by his AI system, the “Creativity Machine.” Thaler applied for copyright registration in 2018, naming the AI as the creator and himself as the owner.

However, the U.S. Copyright Office rejected his request, citing the requirement for human authorship in copyright law. Thaler challenged the ruling, but both the U.S. District Court and the Court of Appeals upheld the Copyright Office’s decision.

In its ruling, the appellate court reaffirmed that U.S. copyright law mandates human authorship. The court highlighted that multiple provisions of the Copyright Act presuppose a human creator, further solidifying the requirement that only works with human involvement can qualify for copyright registration.

Google Tests An AI-Only Version Of Its Search Engine – www.ndtv.com
Source Link
Excerpt:

Alphabet’s Google launched an experimental version of its search engine on Wednesday that completely eliminates its classic 10 blue links in favor of an AI-generated summary.

The new feature, available to subscribers of Google One AI Premium, can be accessed via the results page for any search query by clicking on a tab labeled “AI Mode” to the side of existing options like Images and Maps.

“We’ve heard from power users that they want AI responses for even more of their searches,” Robby Stein, a vice president of product, said in a blog post.

Google One AI Premium is a $19.99 per month plan that provides extra cloud storage and special access to some AI features.

Google currently displays AI Overviews, summaries that are increasingly appearing atop the traditional hyperlinks to relevant webpages, for users in more than 100 countries. It began adding advertisements to AI Overviews last May.

US firm unveils plan for 100,000-strong humanoid robot army to counter China – Interesting Engineering
Source Link
Excerpt:

Figure AI announced that it had signed its second major commercial partner, bringing the dream of humanoid robots from labs into everyday life closer than ever. CEO Brett Adcock said the deal might make it possible to ship 100,000 humanoid robots over the next four years.

While details of the new customer remain under wraps, Adcock claims it is “one of the biggest U.S. companies,” spurring immediate speculation that it could be a large retailer or technology enterprise with significant labor needs.

Adcock’s remarks came in an update on LinkedIn, where he emphasized the importance of forging deep relationships with high-capacity customers rather than spreading the company’s efforts thin across many smaller clients. “Our newest customer is one of the biggest U.S. companies,” Adcock said.

‘AI to outsmart humans?’: Scientists warn of risk as Artificial Intelligence can now clone itself – timesofindia.indiatimes.com
Source Link
Excerpt:

Scientists warn that Artificial Intelligence (AI) has crossed a critical “red line” as researchers in China revealed that two leading large language models (LLMs) can replicate themselves, raising concerns about safety and ethical boundaries.

“Successful self-replication under no human assistance is the essential step for AI to outsmart (humans), and is an early signal for rogue AIs,” the researchers stated in their study, published on December 9, 2024, in the preprint database arXiv.