Story by Benjamin Shingler • May 6, 2023
A visitor speaks with a PAL Robotics robot at the Mobile World Congress in Barcelona, Spain, last month. Experts say the Canadian government should strengthen its proposed legislation that would govern emerging AI technologies. © AP
The headlines have been, to say the least, troubling.
Most recently, Geoffrey Hinton, the so-called Godfather of AI, quit his post at Google and warned that rapid advances in artificial intelligence could ultimately pose an existential threat to humankind.
"I think that it's conceivable that this kind of advanced intelligence could just take over from us," the renowned British-Canadian computer scientist told CBC's As It Happens.
"It would mean the end of people."
While such stark comments are impossible to ignore, some experts say they risk obscuring more immediate, practical concerns for Canada.
"Whether deliberately or inadvertently, folks who are talking about the existential risk of AI – even in the negative – are kind of building up and hyping the field," said Luke Stark, an assistant professor of information and media studies at Western University in London, Ont.
"I think it's a bit of a red herring from many of the concerns about the ways these systems are being used by institutions and businesses and governments right now around the world and in Canada."
Stark, who researches the social impacts of technologies such as artificial intelligence, is among the signatories of an open letter critical of the federal government's proposed legislation on artificial intelligence, Bill C-27.
The letter argues the government's Artificial Intelligence and Data Act (AIDA), which is part of C-27, is too short on details, leaving many important aspects of the rules around AI to be decided after the law is passed.
Look to EU for guidance, experts say
The legislation, tabled last June, recently completed its second reading in the House of Commons and will be sent to committee for study.
In a statement, a spokesperson for Innovation, Science and Economic Development Canada said "the government expects that amendments will be proposed in response to testimony from experts at committee, and is open to considering amendments that would improve the bill."
Experts say other jurisdictions, including the European Union and the United Kingdom, have moved more quickly toward putting in place strong rules governing AI.
They cite a long list of human rights and privacy concerns related to the technology, ranging from its use by law enforcement to misinformation and instances where it reinforces patterns of racism and discrimination.
The proposed legislation wouldn't adequately address such concerns, said Maroussia Lévesque, a PhD candidate in law at Harvard University who previously led the AI and human rights file at Global Affairs Canada.
Lévesque described the legislation as an "empty shell" in a recent essay, saying it lacks "basic legal clarity."
In an interview over Zoom, Lévesque held up a draft of the law covered in blue sticky tabs – each one marking an instance where a provision of the law remains undefined.
"This bill leaves really important concepts to be defined later in regulation," she said.
The bill also proposes the creation of a new commissioner to oversee AI and data in Canada, which seems like a positive step on the surface for those hoping for greater oversight.
But Lévesque said the position is a "misnomer," since unlike some other commissioners, the AI and data appointee won't be an independent agent heading a regulatory agency.
"From a structural standpoint, it is really problematic," she said.
"You're folding protection into an innovation-driven mission and sometimes these will be at odds. It's like putting the brakes and stepping on the accelerator at the same time."
Lévesque said the EU has a "much more robust scheme" when it comes to proposed legislation on artificial intelligence.
The European Commission began drafting its legislation in 2021 and is nearing the finish line.
Under the legislation, companies deploying generative AI tools, such as ChatGPT, will have to disclose any copyrighted material used to develop their systems.
Lévesque likened their approach to the checks required before a new airplane or pharmaceutical drug is brought to market.
"It's not perfect — people can disagree about it. But it's on the brink of being adopted now, and it bans certain types of AI systems."
In Stark's view, the Liberal government has put an emphasis on AI as a driver of economic growth and tried to brand Canada as an "ethical AI centre."
"To fulfil the promise of that kind of messaging, I'd like to see the government being much more, broadly, consultative and much more engaged outside the kind of technical communities Montreal, and Toronto that I think have a lot of sway with the government," he said.
'Hurry up and slow down'
The Canadian Civil Liberties Association is among the groups hoping to be heard in this next round of consultations.
"We have not had sufficient input from key stakeholders, minority groups and people who we think are likely to be disproportionately affected by this bill," said Tashi Alford-Duguid, a privacy lawyer with CCLA.
Alford-Duguid said the government needs to take a "hurry up and slow down" approach.
"The U.K. has undertaken much more extensive consultations; we know that the EU is in the midst of very extensive consultations. And while neither of those laws look like they're going to be perfect, the Canadian government is coming in at this late hour, and trying to give us such rushed and ineffective legislation instead," he said.
"We can just look around and see we can already do better than this."
New use for A.I.: correctly estimating fish stocks
First-ever A.I. algorithm correctly estimates fish stocks, could save millions and bridge global data and sustainability divide
Peer-Reviewed Publication
For the first time, a newly published artificial intelligence (AI) algorithm is allowing researchers to quickly and accurately estimate coastal fish stocks without ever entering the water. This breakthrough could save millions of dollars in annual research and monitoring costs while giving least-developed countries access to data about the sustainability of their fish stocks.
Understanding “fish stocks” – the amount of living fish found in an area’s waters – is critical to understanding the health of our oceans. This is especially true in coastal areas where 90 percent of people working in the fisheries industry live and work. In the wealthiest countries, millions of dollars are spent each year on “stock assessments” – expensive and labor-intensive efforts to get people and boats out into the water to count fish and calculate stocks. That extremely high cost has long been a barrier for tropical countries in Africa and Asia, home to the highest percentage of people who depend on fishing for food and income. Small-scale fishers working coastal waters in many countries are essentially operating blindly, with no real data about how many fish are available in their fisheries. Without data, coastal communities and their governments cannot create management plans to help keep their oceans healthy and productive for the long term.
Now, thanks to advances in satellite data and machine learning algorithms, researchers have created a model that has successfully estimated fish stocks with 85 percent accuracy in the Western Indian Ocean pilot region. This tool has the potential to get data quickly and cheaply into the hands of local and national governments, so they can make informed decisions about their natural resources and keep “blue foods” on the table.
“Our goal is to give people the information required to know the status of their fish resources and whether their fisheries need time to recover or not. The long-term goal is that they, their children, and their neighbors can find a balance between peoples’ needs and ocean health,” said Tim McClanahan, Director of Marine Science at WCS. “This tool can tell us how fish stocks are doing, and how long it will take for them to recover to healthy levels using various management options. It can also tell you how much money you’re losing or can recoup every year by managing your fishery – and in the Western Indian Ocean region where we piloted this tool, it’s no less than $50 to $150 million each year.”
WCS’ McClanahan and fellow co-authors used years of fish abundance data combined with satellite measurements and an AI tool to produce this model. The result? A simple, easy-to-use pilot tool to better understand and manage our oceans. With further development, anyone from anywhere in the world would be able to input seven easily accessible data points – things like distance from shore, water temperature, ocean productivity, existing fisheries management, and water depth – and receive back an accurate fish stock estimate for their nearshore ecosystems.
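The release describes the workflow (seven environmental and management inputs in, a biomass estimate out) without publishing the model itself. Below is a minimal sketch of what such a supervised regressor might look like in scikit-learn. The gradient-boosting choice, the two feature names the article doesn't list, and the synthetic training data are all assumptions, not the authors' published pipeline.

```python
# Hypothetical sketch of a fish-stock regressor in the spirit of the WCS
# tool. Five feature names follow the article; the last two, the model
# choice, and the training data are illustrative placeholders only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

FEATURES = [
    "distance_from_shore_km",   # named in the article
    "water_temperature_c",      # named in the article
    "ocean_productivity",       # named in the article
    "management_regime",        # named in the article (encoded 0..n)
    "water_depth_m",            # named in the article
    "wave_energy",              # assumed: article names only 5 of 7 inputs
    "distance_to_market_km",    # assumed
]

def train_stock_model(X: np.ndarray, y: np.ndarray) -> GradientBoostingRegressor:
    """Fit a biomass regressor on survey data (y = fish biomass, e.g. kg/ha)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    model = GradientBoostingRegressor(n_estimators=500, max_depth=3)
    model.fit(X_train, y_train)
    print(f"held-out R^2: {r2_score(y_test, model.predict(X_test)):.2f}")
    return model

# Usage with synthetic stand-in data (real inputs would come from
# satellite products and in-water survey records):
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(FEATURES)))
y = 50 + 10 * X[:, 1] - 5 * X[:, 0] + rng.normal(size=1000)
model = train_stock_model(X, y)
```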
“We know that during times of crisis and hardship, from climate change-induced weather events to the COVID-19 pandemic, people living on the coast increasingly rely on fishing to feed themselves and their families,” said Simon Cripps, Executive Director of Marine Conservation at WCS. “The value of this model is that it tells managers, scientists, and importantly, local communities how healthy a fishery is and how well it can support the communities that depend on it, especially during times of crisis. Once a fishery’s status is known, it gives communities and managers the information to move forward to design solutions to improve fish stocks and improve the resilience of local communities, the fishing industry, and local and national economies.”
The algorithm has been shown to work with high accuracy for coral reef fisheries in the Western Indian Ocean pilot region. WCS is currently seeking new partnerships and funding to scale the tool so it can be deployed and fill critical data gaps around the world.
This work was completed over a number of years and with the support of grants from The Tiffany and Co. Foundation, the John D. and Catherine T. MacArthur Foundation, the Bloomberg Ocean Initiative, the UK Darwin Initiative, and the Western Indian Ocean Marine Science Association’s Marine Science for Management Program (WIOMSA-MASMA).
JOURNAL
Marine Policy
METHOD OF RESEARCH
Data/statistical analysis
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Multivariate environment-fish biomass model informs sustainability and lost income in Indian Ocean coral reefs
ARTICLE PUBLICATION DATE
1-Jun-2023
Integrating IoT, AI, and machine learning for next-generation healthcare
Book Announcement
The editors have compiled 15 topics that discuss the applications, opportunities, and future trends of machine intelligence in the medical domain. By reading this book, the reader will be familiarized with core principles, algorithms, protocols, emerging trends, security problems, and the latest concepts in e-healthcare services. Moreover, the book's objective is to demonstrate how these technologies can be used to keep patients safe and healthy and, at the same time, empower physicians to deliver superior care.
Key topics covered in the book include: an introduction to the concept of the Internet of Medical Things (IoMT); cloud-edge-based IoMT architecture and performance optimization in the context of Medical Big Data; a comprehensive survey of IoMT interference mitigation techniques for Wireless Body Area Networks (WBANs); artificial intelligence and the Internet of Medical Things; a review of new machine learning and AI solutions in different medical areas; a deep-learning-based solution to optimize obstacle recognition for visually impaired patients; a survey of the latest breakthroughs in Brain-Computer Interfaces and their applications; deep learning for brain tumour detection; and blockchain and patient data management.
The editors believe that the information in the book is of immense value for researchers and professionals involved in medicine and associated fields. It is a complete package which presents the applications of IoT, AI, and machine learning in healthcare delivery and medical devices.
This book is a timely update for basic and advanced medical, biomedical engineering, and computer science readers. It is an excellent resource for anyone interested in learning about the latest IoT, AI, and machine learning developments for healthcare delivery and medical devices.
For more information on the book, visit the page here.
Special topic: Artificial intelligence innovation in remote sensing
Artificial Intelligence (AI) plays a growing role in remote sensing. In particular, during the last decade there has been exponentially increasing interest in deep learning research for the analysis of optical satellite images, hyperspectral images, and radar images. The main reasons for this interest are the increased availability of a wealth of data from different Earth observation instruments and the fact that AI techniques enable a learning-based “data model” in remote sensing. In order to promote research in this area, we have organized a special focus on Artificial Intelligence Innovation in Remote Sensing in SCIENCE CHINA Information Sciences (Vol. 66, Issue 4, 2023). Eight papers are included in this special focus, as detailed below.
Multimodal remote sensing imagery interpretation (MRSII) is an emerging direction in the communities of Earth Observation and Computer Vision. In the contribution entitled “From single- to multi-modal remote sensing imagery interpretation: a survey and taxonomy”, Sun et al. provide a comprehensive overview of the development of this field. Importantly, the paper develops an easily understandable hierarchical taxonomy for the categorization of MRSII, further providing a systematic discussion of recent advances and guidance to researchers facing many realistic MRSII problems.
Hyperspectral imaging integrates 2D plane imaging and spectroscopy to capture the spectral signatures and spatial distribution of objects in a region of interest. However, the reflectance received by the imaging instruments may be degraded owing to environmental disturbances, atmospheric effects and hardware limitations of sensors. Hyperspectral image (HSI) restoration aims at reconstructing a high-quality clean image from a degraded one. In the contribution entitled “A survey on hyperspectral image restoration: from the view of low-rank tensor approximation”, Liu et al. present a cutting-edge and comprehensive technical survey of low-rank tensor approximation for HSI restoration, with a specific focus on denoising, fusion, destriping, inpainting, deblurring and super-resolution, along with their state-of-the-art methods and quantitative and visual performance assessment.
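As a toy illustration of the core assumption behind these methods (that a scene's spectral signatures span a low-dimensional subspace), the sketch below denoises a hyperspectral cube by unfolding it into a pixels-by-bands matrix and truncating its SVD. The tensor formulations surveyed by Liu et al. are far richer; this matrix version, and the synthetic data, are illustrative only.

```python
# Toy low-rank denoising: unfold the (H, W, B) cube into an (H*W, B)
# matrix and keep only its top singular values. Real HSI restoration
# uses low-rank *tensor* models with additional priors.
import numpy as np

def lowrank_denoise(hsi: np.ndarray, rank: int) -> np.ndarray:
    """hsi: (H, W, B) cube; returns its rank-`rank` spectral approximation."""
    h, w, b = hsi.shape
    X = hsi.reshape(h * w, b)                       # pixels x bands
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # truncated reconstruction
    return X_lr.reshape(h, w, b)

# Synthetic check: a rank-3 cube plus noise is recovered far more
# accurately at rank=3 than the noisy input.
rng = np.random.default_rng(0)
clean = (rng.normal(size=(32 * 32, 3)) @ rng.normal(size=(3, 100))).reshape(32, 32, 100)
noisy = clean + 0.5 * rng.normal(size=clean.shape)
restored = lowrank_denoise(noisy, rank=3)
assert np.linalg.norm(restored - clean) < np.linalg.norm(noisy - clean)
```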
Recently, hyperspectral and multispectral image fusion (aimed at generating images with both high spectral and spatial resolutions) has been a popular topic. However, it remains a challenging and underdetermined problem. In the contribution entitled “Learning the external and internal priors for multispectral and hyperspectral image fusion”, Li et al. propose two kinds of priors, i.e., external priors and internal priors, to regularize the fusion problem. The external prior represents the general image characteristics and is learned from abundant sample data by using a Gaussian denoising convolutional neural network trained with additional grayscale images. On the other hand, the internal prior represents the unique characteristics of the hyperspectral and multispectral images to be fused. Experiments on simulated and real datasets demonstrate the superiority of the proposed method. The source code for this paper is available at https://github.com/renweidian.
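The "external prior" idea generalizes to the plug-and-play pattern: a trained denoiser acts as a regularizer inside an iterative restoration loop. The sketch below shows that pattern in its simplest form; a Gaussian filter stands in for the paper's trained denoising CNN, a single symmetric blur stands in for the actual fusion forward model, and none of it is the authors' algorithm.

```python
# Plug-and-play sketch of an "external prior": alternate a data-fidelity
# gradient step with a denoiser that encodes learned image statistics.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(x):                    # stand-in forward model; symmetric, so self-adjoint
    return gaussian_filter(x, sigma=2.0)

def denoise(x):                 # stand-in for the trained denoising CNN
    return gaussian_filter(x, sigma=0.5)

def pnp_restore(y, steps=100, step_size=1.0):
    """Minimize ||blur(x) - y||^2 with a denoiser applied as the prior step."""
    x = y.copy()
    for _ in range(steps):
        grad = blur(blur(x) - y)            # adjoint(forward(x) - y)
        x = denoise(x - step_size * grad)   # prior step = denoising
    return x

# Usage: recover a smooth image from its blurred observation.
rng = np.random.default_rng(0)
truth = gaussian_filter(rng.normal(size=(64, 64)), sigma=1.0)
x_hat = pnp_restore(blur(truth))
```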
Wide-beam autofocus processing is essential for high-precision imaging of airborne synthetic aperture radar (SAR) data when inertial navigation system/global positioning system (INS/GPS) data are absent or insufficiently accurate. In the contribution entitled “Wide-beam SAR autofocus based on blind resampling”, Chen and Yu propose a full-aperture autofocus method for wide-beam SAR based on blind resampling. Unlike baseline methods, the proposed approach does not require INS/GPS data and can significantly improve overall image quality. Results on measured wide-beam SAR data verify the effectiveness of the newly proposed algorithm.
Remote sensing image (RSI) semantic segmentation has attracted increasing research interest in recent years. However, RSIs are difficult to process holistically on currently available graphics processing unit (GPU) cards because of the imagery’s large fields of view (FOVs). Furthermore, prevailing practices such as image downsampling and cropping inevitably decrease the quality of semantic segmentation. In the contribution entitled “MFVNet: a deep adaptive fusion network with multiple field-of-views for remote sensing image semantic segmentation”, Li et al. propose a new deep adaptive fusion network with multiple FOVs (MFVNet) for RSI semantic segmentation, surpassing previous state-of-the-art models on three typical RSI datasets. Codes and pre-trained models for this paper are publicly available at https://github.com/weichenrs/MFVNet.
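Reduced to its simplest form, the multi-FOV idea is to run a segmentation model on inputs resized to several scales (different effective fields of view) and fuse the upsampled predictions. MFVNet learns an adaptive, pixel-wise fusion; the plain averaging below is a simplified stand-in, not the paper's network.

```python
# Simplified multi-FOV inference: predict at several scales and average
# the logits after resizing them back to the original resolution.
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_fov_predict(model, image, scales=(0.5, 1.0, 2.0)):
    """image: (1, C, H, W) tensor; returns fused class logits (1, K, H, W)."""
    h, w = image.shape[-2:]
    fused = None
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode="bilinear",
                          align_corners=False)
        logits = model(x)                                    # (1, K, h', w')
        logits = F.interpolate(logits, size=(h, w), mode="bilinear",
                               align_corners=False)
        fused = logits if fused is None else fused + logits
    return fused / len(scales)

# Usage with a toy fully-convolutional "model" (5 classes, 3 input bands):
model = torch.nn.Conv2d(3, 5, kernel_size=3, padding=1)
image = torch.randn(1, 3, 128, 128)
logits = multi_fov_predict(model, image)   # (1, 5, 128, 128)
```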
Change detection of buildings, given two registered aerial images captured at different times, aims to detect and localize image regions where buildings have been added or torn down between flyovers. The main challenges are the mismatch of nearby buildings and the semantic ambiguity of building facades. In the contribution entitled “Detecting building changes with off-nadir aerial images”, Pang et al. present a multi-task guided change detection network model, named MTGCD-Net, which provides indispensable and complementary building parsing and matching information, along with extensive comparisons to existing methods. More importantly, a new benchmark dataset, named BANDON, was created in this research and is available at https://github.com/fitzpchao/BANDON.
Photovoltaic devices, a typical new energy source, have progressed rapidly and become one of the main sources of power generation in the world. In the contribution “AIR-PV: a benchmark dataset for photovoltaic panel extraction in optical remote sensing imagery”, Yan et al. propose a large-scale benchmark dataset, named AIR-PV, for photovoltaic panel extraction in RS imagery. The main features of this benchmark are: (1) large scale, with wide distribution across five provinces of western China, covering more than 3 million square kilometers and more than 300,000 photovoltaic panels across a wide range of geographical styles and background diversity; (2) one of the earliest publicly available datasets (https://github.com/AICyberTeam) for photovoltaic panel extraction, providing a standard data foundation for applying advanced deep learning technology to photovoltaic panel extraction in remote sensing, thereby promoting various social applications related to photovoltaic power.
In the last contribution, “Multi-layer composite autoencoders for semi-supervised change detection in heterogeneous remote sensing images”, Shi et al. develop concise multi-layer composite autoencoders for change detection in heterogeneous remote sensing images. Their design avoids the complex alignment or transformations of traditional change detection frameworks and requires only 0.1% of true labels (approaching the cost of unsupervised models).
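A heavily simplified sketch of the underlying idea: map each modality into a shared latent space with its own encoder, then flag pixels whose latent codes disagree. The per-pixel MLP design, layer sizes, and function names below are assumptions; the paper's multi-layer composite architecture and training scheme are more elaborate.

```python
# Minimal heterogeneous change-detection sketch with two encoders sharing
# a latent space. Untrained encoders give meaningless scores; in practice
# the codes would be aligned (e.g. via reconstruction losses plus the
# sparse 0.1% of labels) before computing the change map.
import numpy as np
import torch
import torch.nn as nn

class PixelEncoder(nn.Module):
    """Per-pixel encoder: maps one modality's bands to a shared code."""
    def __init__(self, in_bands: int, code_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_bands, 64), nn.ReLU(), nn.Linear(64, code_dim)
        )
    def forward(self, x):                       # x: (N, in_bands)
        return self.net(x)

def change_map(enc_a, enc_b, img_a, img_b):
    """img_*: (H, W, B*) arrays from two sensors; returns (H, W) scores."""
    h, w, _ = img_a.shape
    za = enc_a(torch.as_tensor(img_a, dtype=torch.float32).reshape(h * w, -1))
    zb = enc_b(torch.as_tensor(img_b, dtype=torch.float32).reshape(h * w, -1))
    return (za - zb).norm(dim=1).reshape(h, w)  # large = likely change

# Usage with dummy optical (4-band) and SAR-like (9-band) inputs:
enc_a, enc_b = PixelEncoder(4), PixelEncoder(9)
img_a = np.random.rand(64, 64, 4).astype("float32")
img_b = np.random.rand(64, 64, 9).astype("float32")
scores = change_map(enc_a, enc_b, img_a, img_b)  # (64, 64) change scores
```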
Please find below details of this Special Topic: Artificial Intelligence Innovation in Remote Sensing.
Sun X, Tian Y, Lu W X, et al. From single- to multi-modal remote sensing imagery interpretation: a survey and taxonomy. Sci China Inf Sci, 2023, 66(4): 140301
https://link.springer.com/article/10.1007/s11432-022-3588-0
Liu N, Li W, Wang Y J, et al. A survey on hyperspectral image restoration: from the view of low-rank tensor approximation. Sci China Inf Sci, 2023, 66(4): 140302
https://link.springer.com/article/10.1007/s11432-022-3609-4
Li S T, Dian R W, Liu H B. Learning the external and internal priors for multispectral and hyperspectral image fusion. Sci China Inf Sci, 2023, 66(4): 140303
https://link.springer.com/article/10.1007/s11432-022-3610-5
Chen J L, Yu H W. Wide-beam SAR autofocus based on blind resampling. Sci China Inf Sci, 2023, 66(4): 140304
https://link.springer.com/article/10.1007/s11432-022-3574-7
Li Y S, Chen W, Huang X, et al. MFVNet: a deep adaptive fusion network with multiple field-of-views for remote sensing image semantic segmentation. Sci China Inf Sci, 2023, 66(4): 140305
https://link.springer.com/article/10.1007/s11432-022-3599-y
Pang C, Wu J, Ding J, et al. Detecting building changes with off-nadir aerial images. Sci China Inf Sci, 2023, 66(4): 140306
https://link.springer.com/article/10.1007/s11432-022-3691-4
Yan Z Y, Wang P J, Xu F, et al. AIR-PV: a benchmark dataset for photovoltaic panel extraction in optical remote sensing imagery. Sci China Inf Sci, 2023, 66(4): 140307
https://link.springer.com/article/10.1007/s11432-022-3663-1
Shi J, Wu T C, Yu H W, et al. Multi-layer composite autoencoders for semi-supervised change detection in heterogeneous remote sensing images. Sci China Inf Sci, 2023, 66(4): 140308
https://link.springer.com/article/10.1007/s11432-022-3693-0
JOURNAL
Science China Information Sciences