05 Sci-Tech

News Source
EXCERPT:

For years, the way large language models handle inference has been stuck inside a box — literally. The high-bandwidth RDMA networks that make modern LLM serving work have confined both prefill and decode to the same datacenter, sometimes even the same rack. A team of researchers at Moonshot AI and Tsinghua University is making the case that this constraint is about to break down — and that the right architecture can already exploit that shift.

The research team introduces Prefill-as-a-Service (PrfaaS), a cross-datacenter serving architecture that selectively offloads long-context prefill to standalone, compute-dense prefill clusters and transfers the resulting KVCache over commodity Ethernet to local PD clusters for decode. The result, in a case study using an internal 1T-parameter hybrid model, is 54% higher serving throughput than a homogeneous PD baseline and 32% higher than a naive heterogeneous setup — while consuming only a fraction of available cross-datacenter bandwidth. The research team notes that when compared at equal hardware cost, the throughput gain is approximately 15%, reflecting that the full 54% advantage comes partly from pairing higher-compute H200 GPUs for prefill with H20 GPUs for decode.
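The core trade-off above — remote prefill is faster, but the KVCache must then cross a slower Ethernet link — can be sketched as a simple routing policy. Everything below is an illustrative assumption, not a detail from the paper: the function names, thresholds, and toy model dimensions are invented for exposition.

```python
# Illustrative sketch (not from the paper): decide whether a request's
# prefill should run on a remote compute-dense cluster, based on whether
# remote prefill plus KVCache transfer beats local prefill end to end.

def kvcache_bytes(prompt_tokens: int, layers: int, kv_heads: int,
                  head_dim: int, bytes_per_elem: int = 2) -> int:
    """KVCache size for one request: K and V tensors per layer, fp16."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * prompt_tokens

def should_offload_prefill(prompt_tokens: int,
                           local_prefill_tok_s: float,
                           remote_prefill_tok_s: float,
                           link_bytes_s: float,
                           cache_bytes: int) -> bool:
    """Offload when remote compute + Ethernet transfer is faster than local."""
    local_time = prompt_tokens / local_prefill_tok_s
    remote_time = prompt_tokens / remote_prefill_tok_s + cache_bytes / link_bytes_s
    return remote_time < local_time

# Toy numbers: a 32k-token prompt, a remote cluster 3x faster at prefill,
# and 10 GB/s of usable cross-datacenter bandwidth.
cache = kvcache_bytes(prompt_tokens=32_000, layers=60, kv_heads=8, head_dim=128)
print(should_offload_prefill(32_000, 20_000.0, 60_000.0, 10e9, cache))  # → True
```

The sketch also shows why the paper's "long-context" qualifier matters: prefill time grows with prompt length faster than transfer time does, so offloading tends to pay off only past some prompt-length threshold.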

News Source
EXCERPT:

Human societies have not just adapted to the natural world. They have steadily learned how to transform it. Drawing on research from archaeology, ecology, anthropology, and evolutionary theory, Erle Ellis, professor of geography and environmental systems at the University of Maryland Baltimore County, explains how cultural practices have evolved to give humans extraordinary influence over the ecosystems that sustain them.

From early uses of fire to cook food and shape landscapes to modern systems like industrial agriculture, global trade, and rapidly growing cities, societies have developed powerful tools and institutions. These social and cultural advances have allowed humans to reshape the planet on a massive scale while improving their ability to survive and thrive.

News Source
EXCERPT:
The Perseus Cluster is a massive galaxy cluster located in the constellation Perseus. It is one of the largest structures in the observable universe, comprising more than a thousand galaxies—equivalent to roughly a thousand trillion times the mass of the sun. Hot gases within the cluster, known as the intracluster medium (ICM), emit powerful X-rays detectable by telescopes. The heavy elements in these gases were forged by billions of supernova explosions, and their chemical composition reveals how typical supernovae have exploded throughout cosmic history.

News Source
EXCERPT:

Anthropic, developer of the AI tool Claude, has abruptly announced the rollout of a new identity-verification system requiring users to complete a real-time selfie check while holding a government-issued ID.

The move has drawn global attention, but for Chinese users in particular, it feels like a heavy blow that erects a difficult-to-cross “wall” in AI access.

This verification is not being applied universally to all users at once. Instead, it is being introduced gradually in specific scenarios. When users attempt to access certain advanced features, or as part of routine platform integrity checks and other safety and compliance measures, a verification prompt may appear.

The process itself appears simple and typically takes no more than five minutes. However, users must prepare a government-issued photo ID—such as a passport, driver’s license, or national ID card—and use a camera-enabled device to capture a real-time selfie.

For Chinese users, the impact of this mechanism is both broad and profound. The barrier to entry has been significantly raised: individuals without passports are excluded from using Claude.

Even for those who do have passports, existing verified accounts may become valuable assets, while new users face real-name verification hurdles that make normal access increasingly difficult.

News Source
EXCERPT:

Ahead of a consequential vote on extending the government’s authority to conduct overseas espionage, several House conservatives are expressing their concerns.

On April 20, Section 702 of the Foreign Intelligence Surveillance Act, which enables the government to surveil foreigners located outside the United States, is set to expire.

Many House Republicans and President Donald Trump have argued in the past that this power is easily abused, resulting in the inadvertent surveillance of American citizens.

Back in 2024, facing a different deadline, Congress agreed to extend Section 702 for two years.

Several members of the House Republican Conference demanded reforms to the authority, some of which were ultimately granted.

News Source
EXCERPT:

Key Takeaways

  • A 16-year-old boy shot two people in downtown Seattle before being killed by a licensed concealed carrier.
  • The shooting occurred near the Four Seasons Hotel during a fight that escalated with gunfire.
  • Two victims, aged 18 and 17, were hospitalized in serious condition following the incident.
  • The armed citizen cooperated with police and was not arrested after the event.
  • This incident highlights the importance of lawful concealed carry in responding to active threats.


SEATTLE, WA — A 16-year-old boy who shot two people in downtown Seattle Wednesday night was killed by a licensed concealed carrier who intervened at the scene. As reported by MSN, the initial shooting happened around 10 p.m. near the Four Seasons Hotel on Union Street.

Seattle Police Chief Shon Barnes said three people got into a fight when one pulled out a gun and shot two bystanders before fleeing the scene. A private citizen who was licensed to carry a firearm stepped in and shot the suspect.

Seattle Fire Department crews treated an 18-year-old man and a 17-year-old boy, both transported to the hospital in serious condition. The 16-year-old suspect was taken to Harborview Medical Center, where he died from his injuries.

News Source
EXCERPT:

As scientists confirmed that March was the United States’ most abnormally hot month in recorded history, dozens of climate deniers gathered to promote misinformation and tout their newfound influence on federal policy.

At a conference hosted by the prominent science-denying think tank the Heartland Institute last week, a crowd of mostly middle-aged men in suits claimed the world is finally waking up to the idea that the climate crisis does not exist. “I feel wonderful,” James Taylor, president of the Heartland Institute, said in an interview. “The truth is winning out.”

The clearest sign of the crowd’s rising power was the gathering’s keynote speaker: Lee Zeldin, the administrator of the Environmental Protection Agency (EPA), whom President Donald Trump is also reportedly considering for attorney general. “It is a day to celebrate vindication,” Zeldin said on Wednesday morning.

News Source
EXCERPT:

Rumors that Earth’s gravity suddenly disappeared for a few seconds have circulated widely on the Internet lately. This rumor is associated with 12 August 2026 and is based on an alleged link between a so-called Project Anchor and the temporary disappearance of Earth’s gravity. Although this might sound dramatic, scientists have stated that this information is absolutely false. According to NASA and other relevant institutions, there is not a single scientific reason to believe in this conspiracy theory. It can be helpful to investigate the origin of this rumor and examine the nature of gravity.

What does the August 12 gravity theory actually claim?

The statement was never made by a scientific organisation, nor was it backed up by any study in the field. The statement originated online, where creative content can easily attract the interest of netizens if it is bizarre or sensational enough. The case at hand had elements that sounded believable because of the inclusion of a supposed “leaked document” and an alleged “secret program” of NASA. As reported by The New York Post, there is also no record in history of any scientific venture named “Project Anchor.” No documents have been authenticated for it, and no scientists or agencies have ever endorsed the idea. This is a classic example of viral misinformation in the age of the internet.

News Source
EXCERPT:

Training a modern large language model (LLM) is not a single step but a carefully orchestrated pipeline that transforms raw data into a reliable, aligned, and deployable intelligent system. At its core lies pretraining, the foundational phase where models learn general language patterns, reasoning structures, and world knowledge from massive text corpora. This is followed by supervised fine-tuning (SFT), where curated datasets shape the model’s behavior toward specific tasks and instructions. To make adaptation more efficient, techniques like LoRA (Low-Rank Adaptation) and QLoRA (Quantized LoRA) enable parameter-efficient fine-tuning without retraining the entire model.

Alignment layers such as RLHF (Reinforcement Learning from Human Feedback) further refine outputs to match human preferences, safety expectations, and usability standards. More recently, reasoning-focused optimizations like GRPO (Group Relative Policy Optimization) have emerged to enhance structured thinking and multi-step problem solving. Finally, all of this culminates in deployment, where models are optimized, scaled, and integrated into real-world systems. Together, these stages form the modern LLM training pipeline—an evolving, multi-layered process that determines not just what a model knows, but how it thinks, behaves, and delivers value in production environments.
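Of the stages above, LoRA is the easiest to make concrete: instead of updating a full weight matrix W, training learns a low-rank pair (A, B) whose product is added to the frozen W. The sketch below is illustrative, not any library's API; the dimensions, rank, and scaling are toy values chosen for exposition.

```python
import numpy as np

# Minimal LoRA sketch: the frozen weight W is augmented with a low-rank
# update (alpha / r) * B @ A, so only A and B are trained — r * (d_in + d_out)
# values instead of d_out * d_in.

d_in, d_out, r, alpha = 512, 512, 8, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: no change at start

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + (alpha / r) * B @ A, applied without
    # materializing the full matrix.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(4, d_in))
# With B zero-initialized, the adapted model reproduces the frozen model.
print(np.allclose(lora_forward(x), x @ W.T))  # → True

# Fraction of parameters that are actually trained:
print((A.size + B.size) / W.size)  # → 0.03125
```

At these toy dimensions the adapter trains about 3% of the layer's parameters; QLoRA pushes the savings further by holding the frozen W in 4-bit quantized form while A and B stay in higher precision.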

News Source
EXCERPT:

The shift to A.I.-driven interfaces is transforming advertising from attention-grabbing to machine-readable participation.

For decades, advertising has quietly powered the modern internet. It funded the rise of search engines, social platforms, maps, email and media, making them accessible to billions of people around the world. Most users never paid directly for these services, and yet they benefited from one of the most open and expansive information ecosystems ever created. 

Now, that ecosystem is being reshaped. Over the past year, the rapid adoption of generative A.I. and the corresponding decline in traditional search traffic for many publishers have intensified questions about how the next phase of the internet will be funded. 

Artificial intelligence is rapidly becoming the new front door to information. Instead of typing queries into a search bar and sifting through links, users are turning to A.I. systems to deliver direct answers, recommendations and decisions. Companies like OpenAI, Perplexity and Anthropic are redefining how information is accessed altogether. Meanwhile, incumbents like Google are integrating A.I.-generated overview answers directly into search results, signaling a structural shift in how users discover information.

News Source
EXCERPT:

For decades, physicists have been trying to answer a fundamental question: can electrons move like a perfectly smooth, frictionless fluid governed by a universal quantum value? Detecting this unusual behavior has proven extremely challenging. In real materials, tiny imperfections such as atomic defects and impurities tend to disrupt these delicate quantum effects, making them nearly impossible to observe.

Now, researchers at the Department of Physics, Indian Institute of Science (IISc), working with collaborators from the National Institute for Materials Science in Japan, have finally identified this elusive quantum fluid in graphene. This material consists of a single layer of carbon atoms arranged in a flat sheet. Their findings, reported in Nature Physics, open a new path for studying quantum phenomena and position graphene as a powerful platform for exploring effects that were previously out of reach in laboratory settings.

“It is amazing that there is so much to do on just a single layer of graphene even after 20 years of discovery,” says Arindam Ghosh, Professor at the Department of Physics, IISc, and one of the corresponding authors of the study.

News Source
EXCERPT:

Nvidia is the undisputed king of AI chips. But thanks to the AI it helped build, the champ could soon face growing competition.

Modern AI runs on Nvidia designs, a dynamic that has propelled the company to a market cap of well over $4 trillion. Each new generation of Nvidia chip allows companies to train more powerful AI models using hundreds or thousands of processors networked together inside vast data centers. One reason for Nvidia’s success is that it provides software to help program each new generation of chip. That may soon not be such a differentiated skill.

A startup called Wafer is training AI models to do one of the most difficult and important jobs in AI—optimizing code so that it runs as efficiently as possible on a particular silicon chip.

Emilio Andere, cofounder and CEO of Wafer, says the company performs reinforcement learning on open source models to teach them to write kernel code, the low-level routines that execute directly on a chip's hardware. Andere says Wafer also adds “agentic harnesses” to existing coding models like Anthropic’s Claude and OpenAI’s GPT to soup up their ability to write code that runs directly on chips.
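Reinforcement learning for kernel generation needs a reward signal, and one natural shape for it is a correctness-gated speedup score. The sketch below is a hypothetical illustration of that idea, not Wafer's actual method; the function name and penalty values are invented.

```python
# Hypothetical reward for an RL loop that trains a model to emit faster
# kernels: correctness is a hard gate, and reward grows with measured
# speedup over a reference implementation.

def kernel_reward(compiled: bool, outputs_match: bool,
                  baseline_ms: float, candidate_ms: float) -> float:
    if not compiled or not outputs_match:
        return -1.0                       # broken or wrong kernels are penalized
    speedup = baseline_ms / candidate_ms  # > 1.0 means faster than baseline
    return max(0.0, speedup - 1.0)        # reward only genuine improvement

print(kernel_reward(True, True, baseline_ms=4.0, candidate_ms=2.0))   # → 1.0
print(kernel_reward(True, True, baseline_ms=4.0, candidate_ms=5.0))   # → 0.0
print(kernel_reward(True, False, baseline_ms=4.0, candidate_ms=1.0))  # → -1.0
```

Gating on output correctness before rewarding speed matters in practice: a model rewarded on timing alone quickly learns to emit fast kernels that compute the wrong answer.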