In an annual report, the WTO identified AI as one of the few bright spots as the global trading system has been upended by the United States slapping high tariffs on its trading partners
By AFP
September 17, 2025

Artificial intelligence could boost the value of global trade by almost 40 percent by 2040 thanks to cost reductions and productivity gains, the World Trade Organization said Wednesday.
In its latest annual World Trade Report, the WTO identified AI as one of the few bright spots as the global trading system has been upended by the United States slapping high tariffs on its trading partners.
“AI holds major promise to boost trade by lowering trade costs and reshaping the production of goods and services,” WTO chief Ngozi Okonjo-Iweala said while presenting the report.
She said WTO simulations suggest AI could increase exports of goods and services by nearly 40 percent above current trends.
However, much as the technology threatens to disrupt labour markets, a lack of proper policies could see lower-income countries miss out on the opportunities.
“One important question is whether AI will lift opportunities for all, or whether it will deepen existing inequalities and exclusion,” Okonjo-Iweala said.
If lower-income economies fail to bridge the digital divide, WTO economists calculate they would see only an eight percent gain in incomes by 2040, far below the 14 percent gain in higher-income economies.
However, if they narrow the digital infrastructure gap by 50 percent and adopt AI more widely they could match the gains in higher-income countries.
“With the right mix of trade, investment and complementary policies, AI can create new growth opportunities in all economies,” Okonjo-Iweala said.
At the same time, the WTO found that countries are applying more restrictions on the trade of AI-related goods.
Nearly 500 restrictions were in place last year, mostly imposed by higher- and middle-income economies. That compares with 130 restrictions in 2012.
By AFP
September 16, 2025

Google-owned YouTube has become the world's most popular free online video sharing platform since it was founded in California in 2005 and predicts artificial intelligence will help shape its future
YouTube on Tuesday boosted artificial intelligence tools for creators, saying it has paid out more than $100 billion to content-makers in the past four years.
YouTube chief executive Neal Mohan touted AI as an “evolution” aimed at empowering creativity and storytelling at the video-sharing service founded in early 2005 by former PayPal employees Chad Hurley, Jawed Karim, and Steve Chen.
YouTube has become the world’s most popular free online video service with billions of users since it was bought by Google in 2006.
“New AI-powered products will shape our next 20 years,” Mohan said at an event in New York City.
But Mohan insisted that “these are tools, nothing more,” and would not supersede the role of creators.
They “are designed to foster human creativity,” he said.
In one example, the Veo video-generation model from Google DeepMind is being integrated into YouTube, enabling capabilities such as easily creating backgrounds in “Shorts” posted to a feed that competes with TikTok and Instagram Reels.
“New capabilities powered by Veo allow you to apply motion, restyle videos, and add props to your scenes,” YouTube chief product officer Johanna Voolich said in a blog post.
AI will also let creators turn raw footage into draft video content or convert dialogue into a song for soundtracks, Voolich added.
New AI tools will also let creators combine a photo with a video, essentially making it seem as though the person pictured is the one in action.
Podcasts are also a focus, with new tools letting producers use AI to create video versions of what started as just audio broadcasts.
Translation capabilities will also turn to AI not only to translate what is being said in videos but also to make it appear as though the subject was actually speaking that language.
And to fight the proliferation of deepfakes online, YouTube promised that a “likeness detection tool,” soon to be available in beta, will let creators detect AI-generated videos that impersonate them.
The self-taught seismologist: Monitoring earthquakes from optic fibers with AI
Tsinghua University Press
Image: Illustration of how DAS works for earthquake monitoring. An example of DAS data collected in Ridgecrest City, CA is shown on the right. Credit: Visual Intelligence, Tsinghua University Press
Seismology is undergoing significant change with the rise of Distributed Acoustic Sensing (DAS), a fast-growing technology that turns existing fiber-optic cables—including those used for the Internet—into ultra-dense seismic networks with meter-scale sensor spacing. DAS provides a scalable and cost-effective way to monitor earthquakes from local to global scales, but it also poses a pressing challenge: the massive volume of data produced outpaces human capacity to analyze it. Manually labeling earthquake signals, for example, is impractical at such scales. This ‘labeled data bottleneck’ has hindered the use of supervised learning models and prevented DAS from reaching its full potential in earthquake monitoring.
A collaborative team from the University of Montreal, Woods Hole Oceanographic Institution, and UC Berkeley has developed a novel model, DASFormer, that learns to monitor earthquakes from continuous DAS data on its own, effectively serving as an ‘artificial seismologist’. Published (DOI: 10.1007/s44267-025-00085-y) in Visual Intelligence on July 15, 2025, the study introduces a self-supervised pretraining framework that can interpret earthquake signals by identifying anomalies without being told in advance what an earthquake looks like. This represents a transformative advance from a labor-intensive, human-dependent process to one that is automated, intelligent, and scalable.
How does DASFormer learn without labels? It acts as a forecaster, first learning to predict the ‘normal’ state of the world. The model trains itself on massive, unlabeled DAS datasets, learning the predictable spatiotemporal patterns of background signals such as traffic vibrations or environmental noise. When an earthquake occurs, its P- and S-phases appear as sharp, unpredictable anomalies that defy the model's learned predictions. By flagging these deviations, DASFormer effectively turns earthquake detection into an anomaly detection task. This is made possible by a two-stage, coarse-to-fine framework built upon Swin U-Net and Convolutional U-Net architectures, which captures both the high-level context and fine-grained detail of the DAS data simultaneously.
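The idea of turning detection into forecasting can be illustrated with a toy example. The sketch below is not DASFormer (which uses Swin U-Net and Convolutional U-Net stages on 2-D DAS arrays); it replaces the deep forecaster with a simple one-step autoregressive predictor on a single synthetic channel, with all signal values invented, purely to show the "predict normal, flag large forecast errors" principle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-channel trace: predictable background (slow oscillation
# plus small noise) with one sharp transient standing in for a P-wave arrival.
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)
signal[1200:1215] += 5.0  # the "earthquake": an unpredictable anomaly

# Stand-in forecaster: least-squares AR(1) fit on the first half of the
# trace, assumed event-free (this plays the role of self-supervised
# pretraining on background data).
train = signal[:1000]
a = np.dot(train[:-1], train[1:]) / np.dot(train[:-1], train[:-1])

pred = a * signal[:-1]                # one-step-ahead forecast of sample i+1
residual = np.abs(signal[1:] - pred)  # forecast error

# Background is predictable, so its residuals stay small; the transient
# defies the forecast. Flag samples whose error far exceeds the background.
threshold = residual[:1000].mean() + 5 * residual[:1000].std()
detections = np.flatnonzero(residual > threshold) + 1
print("candidate event samples:", detections)
```

With this seed the flagged samples fall at the inserted transient, showing how a forecaster trained only on "normal" data yields an earthquake detector without any labeled events.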
To validate its effectiveness, DASFormer was evaluated on a real-world DAS dataset from Ridgecrest, California, and benchmarked against 22 state-of-the-art forecasting and anomaly detection models. DASFormer achieved the highest performance across all evaluation metrics, with a peak ROC-AUC of 0.906 and an F1 score of 0.565, demonstrating its clear superiority.
“Rather than being limited by the time-consuming process of human annotation, DASFormer represents a seismic shift in how we approach earthquake monitoring with DAS,” said Bang Liu, the team leader of the study. “We now have a scalable and powerful tool that can keep pace with the flood of DAS data, paving the way for new possibilities in earthquake science,” added Zhichao Shen, one of the corresponding authors.
The potential applications of this study are wide-ranging. The model has shown an ability to generalize across distinct environments, such as seafloor cables, highlighting its promise for use in logistically challenging settings. This versatility suggests that DASFormer could serve as a plug-and-play tool for a variety of global seismic monitoring applications. The study also demonstrates the model's potential to be fine-tuned for downstream tasks such as earthquake early warning. Ultimately, the goal is to leverage this self-supervised approach to build a foundation model for seismic intelligence: a powerful system capable of learning from vast unlabeled datasets to deliver automated, accurate, and scalable monitoring. Such advances could significantly enhance public safety and our understanding of earthquake physics.
Funding information
This work was supported by the Canada CIFAR AI Chair Program and the Canada NSERC Discovery Grant (RGPIN-2021-03115).
About the Authors
Dr. Bang Liu is an Associate Professor in the Department of Computer Science and Operations Research (DIRO) at the University of Montreal (UdeM). He is a member of the RALI laboratory (Applied Research in Computer Linguistics) of DIRO, a member of Institut Courtois of UdeM, an associate member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI (CCAI) Chair. His research interests primarily lie in the areas of natural language processing, multimodal & embodied learning, theory and techniques for AGI (e.g., understanding and improving large language models), and AI for science (e.g., health, material science, XR).
Dr. Zhichao Shen is a seismologist and Postdoctoral Investigator at the Department of Geology and Geophysics, Woods Hole Oceanographic Institution. His research interests focus on seismic applications of Distributed Acoustic Sensing (DAS) on both land and seafloor.
About Visual Intelligence
Visual Intelligence is an international, peer-reviewed, open-access journal devoted to the theory and practice of visual intelligence. This journal is the official publication of the China Society of Image and Graphics (CSIG), with Article Processing Charges fully covered by the Society. It focuses on the foundations of visual computing, the methodologies employed in the field, and the applications of visual intelligence, while particularly encouraging submissions that address rapidly advancing areas of visual intelligence research.
Journal
Visual Intelligence
Article Title
DASFormer: self-supervised pretraining for earthquake monitoring