AI
Study reveals bias in AI tools when diagnosing women’s health issue
Machine learning algorithms designed to diagnose a common infection that affects women showed a diagnostic bias among ethnic groups, University of Florida researchers found.
While artificial intelligence tools offer great potential for improving health care delivery, practitioners and scientists warn of their risk of perpetuating racial inequities. The study, published Friday in npj Digital Medicine, a Nature Portfolio journal, is the first to evaluate the fairness of these tools in connection with a women’s health issue.
“Machine learning can be a great tool in medical diagnostics, but we found it can show bias toward different ethnic groups,” said Ruogu Fang, an associate professor in the J. Crayton Pruitt Family Department of Biomedical Engineering and an author of the study. “This is alarming for women’s health as there already are existing disparities that vary by ethnicity.”
The researchers evaluated the fairness of machine learning in diagnosing bacterial vaginosis, or BV, a common condition affecting women of reproductive age, which has clear diagnostic differences among ethnic groups.
Fang and co-corresponding author Ivana Parker, both faculty members in the Herbert Wertheim College of Engineering, pulled data from 400 women, comprising 100 from each of the ethnic groups represented — white, Black, Asian, and Hispanic.
In investigating the ability of four machine learning models to predict BV in women with no symptoms, the researchers found that accuracy varied among ethnicities. Hispanic women had the most false-positive diagnoses, and Asian women received the most false-negative diagnoses.
“The models performed highest for white women and lowest for Asian women,” said Parker, an assistant professor of bioengineering. “This tells us machine learning methods are not treating ethnic groups equally well.”
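To make the kind of disparity the team reports concrete, the sketch below computes false-positive and false-negative rates separately for each ethnic group. It is purely illustrative and is not the study's code; the labels, predictions, and group assignments are made-up placeholder arrays.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical data: true BV status, model predictions, and self-reported ethnicity.
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group = np.array(["white", "white", "Asian", "Hispanic",
                  "Black", "Asian", "Hispanic", "Black"])

for g in np.unique(group):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")  # false-positive rate
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")  # false-negative rate
    print(f"{g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

Groups with systematically higher false-positive or false-negative rates than others would indicate exactly the sort of unequal treatment the study describes.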
Parker said that while they were interested in understanding how AI tools predict disease for specific ethnicities, their study also helps medical scientists understand the factors associated with bacteria in women of varying ethnic backgrounds, which can lead to improved treatments.
BV, one of the most common vaginal infections, can cause discomfort and pain and occurs when natural bacteria levels are out of balance. While there are symptoms associated with BV, many people have no symptoms, making it difficult to diagnose.
It doesn’t often cause complications, but in some cases, BV can increase the risk of sexually transmitted infections, miscarriage, and premature births.
The researchers said their findings demonstrate the need for improved methods of building AI tools in order to mitigate health care bias.
JOURNAL
npj Digital Medicine
ARTICLE TITLE
Ethnic disparity in diagnosing asymptomatic bacterial vaginosis using machine learning
ARTICLE PUBLICATION DATE
17-Nov-2023
Creativity in the age of generative AI: a new era of creative partnerships
Peer-Reviewed Publication
Recent advancements in generative artificial intelligence (AI) have showcased its potential in a wide range of creative activities, such as producing works of art, composing symphonies, and even drafting legal texts or slide presentations. These developments have raised concerns that AI will outperform humans at creative tasks and make knowledge workers redundant. Such concerns were most recently underlined by a Fortune article entitled ‘Elon Musk says AI will create a future where ‘no job is needed’: ‘The AI will be able to do everything’.
In a new paper in a Nature Human Behaviour special issue on AI, researcher Janet Rafner from the Aarhus Institute of Advanced Studies and the Center for Hybrid Intelligence at Aarhus University and Prof. Jacob Sherson, Director of the Center for Hybrid Intelligence, together with international collaborators, discuss the research and societal implications of creativity and AI.
The team of researchers argue that we should direct our attention to understanding and nurturing co-creativity, the interaction between humans and machines, towards what is termed ‘human-centered AI’ and ‘hybrid intelligence.’ In this way, we will be able to develop interfaces that ensure both a high degree of automation through AI and human control, thereby supporting a relationship in which each optimally empowers the other.
Rafner comments: To date, most studies on human-AI co-creativity come from the field of human-computer interaction and focus on the abilities of the AI and on interaction design and dynamics. While these advances are key for understanding the dynamics between humans and algorithms, and human attitudes towards the co-creative process and product, there is an urgent need to enrich these applications with the insights about creativity obtained over the past decades in the psychological sciences.
“Right now, we need to move the conversation away from questions like ‘Can AI be creative?’ One reason for this is that defining creativity is not cut and dried. When investigating human-only, machine-only, and human-AI co-creativity, we need to consider the type and level of creativity in question, from everyday creative activities (e.g. making new recipes, artwork or music) that are perhaps more amenable to machine automation, to paradigm-shifting contributions that may require higher-level human intervention. Additionally, it is much more meaningful to consider nuanced questions like, ‘What are the similarities and differences in human cognition, behavior, motivation and self-efficacy between human-AI co-creativity and human creativity?’” explains Rafner.
Currently, we do not have sufficient knowledge of co-creativity between humans and machines, as the delineation between human and AI contributions (and processes) is not always clear. Looking ahead, researchers should balance predictive accuracy with theoretical understanding (i.e., explainability), towards the goal of developing intelligent systems that both measure and enhance human creativity. When designing co-creative systems such as virtual assistants, it will be essential to balance psychometric rigor with ecological validity. That is, co-creativity tasks should combine precise psychological measurement with state-of-the-art intuitive and engaging interface design.
Interdisciplinary collaborations are needed
The challenge of understanding and properly developing human-AI co-creative systems is not to be faced by a single discipline. Business and management scholars should be included to ensure that tasks sufficiently capture real-world professional challenges and to understand the implications of co-creativity for the future of work at macro and micro organizational scales, such as creativity in team dynamics with blended teams of humans and AI. Linguistics and learning scientists are needed to help us understand the impact and nuances of prompt engineering in text-to-x systems. Developmental psychologists will have to study the impact on human learning processes.
Ethical and meaningful developments
Keeping humans closely in the loop when working with and developing AI is not only seen as more ethical; in most cases, it is also the most efficient long-term choice, the team of researchers argue.
Beyond this, ethics and legal scholars will have to consider the costs and benefits of co-creativity in terms of intellectual property rights, human sense of purpose, and environmental impact.
Access the full scientific paper
A position paper in Nature Human Behaviour’s special issue on AI:
‘Creativity in the age of generative AI’ by Rafner, J., Beaty, R., Kaufman, J. C., Lubart, T., & Sherson, J., in Nature Human Behaviour, 20 November 2023:
LINK
JOURNAL
Nature Human Behaviour
DOI
METHOD OF RESEARCH
Commentary/editorial
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Creativity in the age of generative AI
ARTICLE PUBLICATION DATE
20-Nov-2023
AI can 'lie and BS' like its maker, but still not intelligent like humans
Paper by UC’s Anthony Chemero explains AI thinking as opposed to human thinking
The emergence of artificial intelligence has caused differing reactions from tech leaders, politicians and the public. While some excitedly tout AI technology such as ChatGPT as an advantageous tool with the potential to transform society, others are alarmed that any tool with the word “intelligent” in its name also has the potential to overtake humankind.
The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, contends that the understanding of AI is muddled by linguistics: That while indeed intelligent, AI cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”
According to our everyday use of the word, AI is definitely intelligent, but there have been intelligent computers for years, Chemero explains in a paper he co-authored in the journal Nature Human Behaviour. To begin, the paper states that ChatGPT and other AI systems are large language models (LLMs), trained on massive amounts of data mined from the internet, much of which shares the biases of the people who post the data.
“LLMs generate impressive text, but often make things up whole cloth,” he states. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean,” he says. “LLMs differ from human cognition because they are not embodied.”
The people who made LLMs call it “hallucinating” when the models make things up, although Chemero says “it would be better to call it ‘bullsh*tting,’” because LLMs just make sentences by repeatedly adding the most statistically likely next word — and they don’t know or care whether what they say is true.
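Chemero’s point about “adding the most statistically likely next word” can be illustrated with a deliberately tiny toy model. The sketch below is not from the paper: it builds bigram counts from a few words and always continues with the most frequent follower, producing locally fluent output with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1  # how often `nxt` follows `prev`

def continue_text(word, steps=5):
    out = [word]
    for _ in range(steps):
        if word not in next_words:
            break
        word = next_words[word].most_common(1)[0][0]  # greedy: most likely next word
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # "the cat sat on the cat" -- grammatical-looking, but nothing is "known"
```

Real LLMs operate at vastly larger scale with far richer statistics, but the underlying move, predicting a plausible next token rather than checking a fact, is the behavior Chemero is describing.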
And with a little prodding, he says, one can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.”
The intent of Chemero’s paper is to stress that the LLMs are not intelligent in the way humans are intelligent because humans are embodied: living beings who are always surrounded by other humans and by material and cultural environments.
“This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.
The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding, “Things matter to us. We are committed to our survival. We care about the world we live in.”
JOURNAL
Nature Human Behaviour
METHOD OF RESEARCH
Commentary/editorial
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
LLMs differ from human cognition because they are not embodied
AI finds formula on how to predict monster waves
Using 700 years’ worth of wave data from more than a billion waves, scientists at the University of Copenhagen and University of Victoria have used artificial intelligence to find a formula for how to predict the occurrence of these maritime monsters
Peer-Reviewed Publication

Long considered myth, freakishly large rogue waves are very real and can split apart ships and even damage oil rigs. Using 700 years’ worth of wave data from more than a billion waves, scientists at the University of Copenhagen and University of Victoria have used artificial intelligence to find a formula for how to predict the occurrence of these maritime monsters. The new knowledge can make shipping safer.
Stories about monster waves, called rogue waves, have been the lore of sailors for centuries. But when a 26-metre-high rogue wave slammed into the Norwegian oil platform Draupner in 1995, digital instruments were there to capture and measure the North Sea monster. It was the first time a rogue wave had been measured, providing scientific evidence that abnormal ocean waves really do exist.
Since then, these extreme waves have been the subject of much study. And now, researchers from the University of Copenhagen’s Niels Bohr Institute have used AI methods to discover a mathematical model that provides a recipe for how – and not least when – rogue waves can occur.
With the help of enormous amounts of big data about ocean movements, researchers can predict the likelihood of being struck by a monster wave at sea at any given time.
"Basically, it is just very bad luck when one of these giant waves hits. They are caused by a combination of many factors that, until now, have not been combined into a single risk estimate. In the study, we mapped the causal variables that create rogue waves and used artificial intelligence to gather them in a model which can calculate the probability of rogue wave formation," says Dion Häfner.
Häfner is a former PhD student at the Niels Bohr Institute and first author of the scientific study, which has just been published in the prestigious journal Proceedings of the National Academy of Sciences (PNAS).
Rogue waves happen every day
In their model, the researchers combined available data on ocean movements and the sea state, as well as water depths and bathymetric information. Most importantly, wave data was collected from buoys in 158 different locations around US coasts and overseas territories that collect data 24 hours a day. When combined, this data – from more than a billion waves – contains 700 years’ worth of wave height and sea state information.
The researchers analyzed the many types of data to find the causes of rogue waves, defined as waves that are at least twice as high as the surrounding waves – including extreme rogue waves that can be over 20 meters high. With machine learning, they transformed it all into an algorithm that was then applied to their dataset.
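As a rough illustration of the definition used above, the sketch below flags individual waves that exceed twice the significant wave height (conventionally the mean of the highest third of waves in a record). The wave record is synthetic and this is not the authors’ detection pipeline, just a toy version of the counting exercise.

```python
import numpy as np

# Synthetic stand-in for a buoy record of individual wave heights, in metres.
wave_heights = np.random.rayleigh(scale=1.5, size=100_000)

# Significant wave height: mean of the highest one-third of waves.
highest_third = np.sort(wave_heights)[-len(wave_heights) // 3:]
significant_wave_height = highest_third.mean()

# Rogue candidates: waves more than twice the significant wave height.
rogues = wave_heights[wave_heights > 2 * significant_wave_height]
print(f"significant wave height: {significant_wave_height:.2f} m, "
      f"rogue candidates: {rogues.size} of {wave_heights.size}")
```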
"Our analysis demonstrates that abnormal waves occur all the time. In fact, we registered 100,000 waves in our dataset that can be defined as rogue waves. This is equivalent around 1 monster wave occurring every day at any random location in the ocean. However, they aren’t all monster waves of extreme size," explains Johannes Gemmrich, the study’s second author.
Artificial intelligence as a scientist

In the study, the researchers were helped by artificial intelligence. They used several AI methods, including symbolic regression, which gives an equation as output rather than just returning a single prediction as traditional AI methods do. By examining more than a billion waves, the researchers' algorithm worked its way to the causes of rogue waves and condensed them into an equation that describes the recipe for a rogue wave. The AI learns the causality of the problem and communicates that causality to humans in the form of an equation that researchers can analyze and incorporate into their future research. "Over decades, Tycho Brahe collected astronomical observations from which Kepler, with lots of trial and error, was able to extract Kepler's Laws. Dion used machines to do with waves what Kepler did with planets. For me, it is still shocking that something like this is possible," says Markus Jochum.
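The key ingredient described here, symbolic regression, can be sketched with an off-the-shelf library. The example below uses gplearn as a stand-in (the authors’ own implementation and their sea-state variables are not given in this release); it recovers a simple hidden formula from noisy samples and prints it as an expression a human can read.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 3.0, size=(500, 2))                   # two made-up sea-state parameters
y = X[:, 0] ** 2 / X[:, 1] + 0.01 * rng.normal(size=500)   # hidden "law" plus noise

# Evolve candidate formulas and keep the simplest one that fits well.
est = SymbolicRegressor(population_size=2000, generations=20,
                        function_set=("add", "sub", "mul", "div"),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)  # prints the best-fitting expression, e.g. div(mul(X0, X0), X1)
```

Unlike a black-box predictor, the output is an equation that researchers can inspect, compare with the laws of physics, and reuse, which is exactly the property the study exploits.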
Phenomenon known since the 1700s
The new study also breaks with the common perception of what causes rogue waves. Until now, it was believed that the most common cause of a rogue wave was when one wave briefly combined with another and stole its energy, causing one big wave to move on.
However, the researchers establish that the most dominant factor in the materialization of these freak waves is what is known as "linear superposition". The phenomenon, known about since the 1700s, occurs when two wave systems cross over each other and reinforce one another for a brief period of time.
"If two wave systems meet at sea in a way that increases the chance to generate high crests followed by deep troughs, the risk of extremely large waves arises. This is knowledge that has been around for 300 years and which we are now supporting with data," says Dion Häfner.
Safer shipping
The researchers' algorithm is good news for the shipping industry, which at any given time has roughly 50,000 cargo ships sailing around the planet. Indeed, with the help of the algorithm, it will be possible to predict when this "perfect" combination of factors is present to elevate the risk of a monster wave that could pose a danger for anyone at sea.
"As shipping companies plan their routes well in advance, they can use our algorithm to get a risk assessment of whether there is a chance of encountering dangerous rogue waves along the way. Based on this, they can choose alternative routes," says Dion Häfner.
Both the algorithm and research are publicly available, as are the weather and wave data deployed by the researchers. Therefore, Dion Häfner says that interested parties, such as public authorities and weather services, can easily begin calculating the probability of rogue waves. And unlike many other models created using artificial intelligence, all of the intermediate calculations in the researchers' algorithm are transparent.
"AI and machine learning are typically black boxes that don't increase human understanding. But in this study, Dion used AI methods to transform an enormous database of wave observations into a new equation for the probability of rogue waves, which can be easily understood by people and related to the laws of physics," concludes Professor Markus Jochum, Dion’s thesis supervisor and co-author.
Links:
Read the scientific paper “Machine-Guided Discovery of a Real-World Rogue Wave Model” published in PNAS: https://www.pnas.org/cgi/doi/10.1073/pnas.2306275120
Read the Wikipedia list of registered rogue waves: https://en.wikipedia.org/wiki/List_of_rogue_waves
Dion Häfner’s research continues at Pasteur Labs.
JOURNAL
Proceedings of the National Academy of Sciences
ARTICLE TITLE
Machine-Guided Discovery of a Real-World Rogue Wave Model
ARTICLE PUBLICATION DATE
20-Nov-2023
AI: Researchers develop automatic text recognition for ancient cuneiform tablets
A new artificial intelligence (AI) software is now able to decipher difficult-to-read texts on cuneiform tablets. It was developed by a team from Martin Luther University Halle-Wittenberg (MLU), Johannes Gutenberg University Mainz, and Mainz University of Applied Sciences. Instead of photos, the AI system uses 3D models of the tablets, delivering significantly more reliable results than previous methods. This makes it possible to search through the contents of multiple tablets to compare them with each other. It also paves the way for entirely new research questions.
In their new approach, the researchers used 3D models of nearly 2,000 cuneiform tablets, including around 50 from a collection at MLU. According to estimates, around one million such tablets still exist worldwide. Many of them are over 5,000 years old and are thus among mankind’s oldest surviving written records. They cover an extremely wide range of topics: "Everything can be found on them: from shopping lists to court rulings. The tablets provide a glimpse into mankind’s past several millennia ago. However, they are heavily weathered and thus difficult to decipher even for trained eyes," says Hubert Mara, an assistant professor at MLU.
This is because the cuneiform tablets are unfired chunks of clay into which writing has been pressed. To complicate matters, the writing system back then was very complex and encompassed several languages. Therefore, not only are optimal lighting conditions needed to recognise the symbols correctly, a lot of background knowledge is required as well. "Up until now it has been difficult to access the content of many cuneiform tablets at once - you sort of need to know exactly what you are looking for and where," Mara adds.
His lab came up with the idea of developing a system of artificial intelligence which is based on 3D models. The new system deciphers characters better than previous methods. In principle, the AI system works along the same lines as OCR software (optical character recognition), which converts images of writing and text into machine-readable text. This has many advantages. Once converted into computer text, the writing can be more easily read or searched through. "OCR usually works with photographs or scans. This is no problem for ink on paper or parchment. In the case of cuneiform tablets, however, things are more difficult because the light and the viewing angle greatly influence how well certain characters can be identified," explains Ernst Stötzner from MLU. He developed the new AI system as part of his master’s thesis under Hubert Mara.
Image: Scan of a cuneiform tablet; some of the tablets are only a few centimeters in size. Credit: Uni Halle / Maike Glöckner
The team trained the new AI software using three-dimensional scans and additional data. Much of this data was provided by Mainz University of Applied Sciences, which is overseeing a large edition project for 3D models of clay tablets. The AI system subsequently did succeed in reliably recognising the symbols on the tablets. "We were surprised to find that our system even works well with photographs, which are actually a poorer source material," says Stötzner.
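The published approach is R-CNN-based wedge detection learned from renderings of the 3D models. The sketch below shows, under assumptions about class count and annotation format that are not from the paper, how a comparable detector could be set up with torchvision’s Faster R-CNN; it is an illustration, not the team’s code.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + "wedge" (assumed label set)

# Start from a pretrained detector and swap in a new box-classification head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# One rendered view of a 3D tablet model plus a single annotated wedge box.
image = torch.rand(3, 512, 512)
target = {
    "boxes": torch.tensor([[100.0, 120.0, 160.0, 170.0]]),  # [x1, y1, x2, y2]
    "labels": torch.tensor([1]),
}

model.train()
losses = model([image], [target])   # dict of classification and box-regression losses
sum(losses.values()).backward()     # an optimiser step would follow in a real training loop

model.eval()
with torch.no_grad():
    detections = model([image])[0]  # detected boxes, labels, and confidence scores
```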
The work by the researchers from Halle and Mainz provides new access to what has hitherto been a relatively exclusive material and opens up many new lines of inquiry. So far, the system is only a prototype that can reliably discern symbols from two languages. However, a total of twelve cuneiform languages are known to exist. In the future, the software could also help to decipher weathered inscriptions, for example in cemeteries, which are three-dimensional like the cuneiform script.
The scientists have already presented their work at several internationally renowned conferences, most recently at the International Conference on Computer Vision. A few weeks ago, the team received the "Best Paper Award" at the Graphics and Cultural Heritage Conference.
Paper: Stötzner E., Homburg T., Bullenkamp J.P. & Mara H. R-CNN based Polygonal Wedge Detection Learned from Annotated 3D Renderings and Mapped Photographs of Open Data Cuneiform Tablets. GCH 2023 - Eurographics Workshop on Graphics and Cultural Heritage. doi: 10.2312/gch.20231157
ARTICLE TITLE
R-CNN based Polygonal Wedge Detection Learned from Annotated 3D Renderings and Mapped Photographs of Open Data Cuneiform Tablets
AI-powered crab gender identification: revolutionizing fishery management and conservation
Deep learning models developed by researchers outperform human fishermen in correctly identifying the gender of horsehair crabs
When winter comes to Japan, fishermen in the northern regions set out to capture one of the most anticipated seasonal delicacies: the horsehair crab. Known locally as “kegani” and bearing the scientific name Erimacrus isenbeckii, this species of crustacean is highly sought after throughout the country. To protect the horsehair crab population from overfishing, the Japanese and prefectural governments have implemented various restrictions on their capture. For example, in Hokkaido, where kegani is abundant, capturing females for consumption is strictly prohibited.
To comply with these laws, experienced fishermen have learned how to tell males apart from females through visual inspection. While it is relatively straightforward to distinguish them by looking at the underside (abdomen) of the crabs, doing so by looking at their shell side is much more challenging. Unfortunately, when captured crabs settle on board a ship, they almost always do so with their shell side pointing up, and picking them up and flipping them individually to determine their sex is time-consuming.
Could this be yet another task artificial intelligence (AI) may excel at? In a recent study, a research team from Japan, including Professor Shin-ichi Satake from Tokyo University of Science (TUS), Japan, sought to answer this question using deep learning. Their latest paper, published in the renowned journal Scientific Reports, is co-authored by Associate Professor Yoshitaka Ueki and Professor Ken Takeuchi from TUS and Assistant Professor Kenji Toyota and Professor Tsuyoshi Ohira from Kanagawa University.
The researchers implemented three deep convolutional neural networks based on three well-established image classification architectures: AlexNet, VGG-16, and ResNet-50. To train and test these models, they used 120 images of horsehair crabs captured in Hokkaido; half of them were males, and the other half were females. A notable advantage of these models is that they are “explainable AI.” Simply put, this means that, given an image of a crab, it is possible to see which specific regions of the image were relevant to the algorithm’s classification decision. This can reveal subtle differences between the males and females that could be useful for manual classification.
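As a rough sketch of what fine-tuning one of these architectures involves, the code below adapts a ResNet-50 (one of the three models named in the study) to the two-class male/female task. The image size, optimiser settings, label encoding, and the random stand-in batch are assumptions for illustration, not the authors’ published configuration.

```python
import torch
from torch import nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-50 and replace its 1000-class head with 2 classes.
model = models.resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, 2)   # 0 = male, 1 = female (assumed encoding)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: 8 crab carapace images (3 x 224 x 224) with random labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)  # one training step on the toy batch
loss.backward()
optimizer.step()
```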
The test results were quite promising in terms of accuracy and performance metrics, as Prof. Satake highlights: “Even though gender classification was virtually impossible by human visual inspection on the shell side, the proposed deep learning models enabled male and female classification with high precision, achieving an F-1 measure of approximately 95% and similarly high accuracy values.” This means that the AI approach vastly outperformed humans and provided consistent, reliable classification.
Interestingly, when observing the heatmaps, which represented the regions the models focused on for classification, the team found significant differences between the sexes. For one, the heatmap was enhanced near the genitalia shape on the abdomen side. When classifying males, the algorithms focused on the lower part of the carapace. In contrast, when classifying females, the algorithms focused on the upper portion of the carapace. This could provide useful information not only for the development of future AI sex classification models for crabs but also shed light on how experienced fishermen can tell males apart from females even when looking at their shell side.
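Heatmaps of this kind are typically produced with gradient-based attribution such as Grad-CAM; whether the authors used that exact technique is not stated in this release. As a hedged sketch, the code below computes a Grad-CAM-style map for the two-class ResNet-50 from the previous example, highlighting which image regions most influenced the predicted class.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # male/female head as before
model.eval()

activations = {}

def save_activation(module, inputs, output):
    output.retain_grad()                 # keep gradients flowing into this feature map
    activations["feats"] = output

hook = model.layer4.register_forward_hook(save_activation)  # last conv stage of ResNet-50

image = torch.randn(1, 3, 224, 224)      # stand-in for a crab photo
logits = model(image)
logits[0, logits.argmax()].backward()    # back-propagate the predicted-class score

feats = activations["feats"]                           # shape (1, 2048, 7, 7)
weights = feats.grad.mean(dim=(2, 3), keepdim=True)    # average gradient per channel
cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
heatmap = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalised 0-1 attention map
hook.remove()
```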
Considering that being captured can be a great source of stress for crabs, being able to quickly tell females apart without flipping them before release could help prevent health or reproductive problems for these crabs. Thus, deep learning could potentially be an important tool for enhancing conservation and farming efforts. “The fact that deep learning can discriminate male and female crabs is an important finding not only for the conservation of these important marine resources but also for the development of efficient aquaculture techniques,” remarks Prof. Satake.
Notably, implementing AI classification techniques directly on ships could reduce the amount of manual work and make crab fishing more cost-effective. Moreover, the proposed models could be retrained and repurposed for the gender classification of other species of crabs, such as the blue crab or the Dungeness crab.
Overall, this study showcases how AI can be leveraged in creative ways to not only make people’s work more efficient but also have a direct positive effect on conservation, responsible fishing, and sustainability of crab aquaculture.
***
Reference
DOI: https://doi.org/10.1038/s41598-023-46606-x
About The Tokyo University of Science
Tokyo University of Science (TUS) is a well-known and respected university, and the largest science-specialized private research university in Japan, with four campuses in central Tokyo and its suburbs and in Hokkaido. Established in 1881, the university has continually contributed to Japan's development in science through inculcating the love for science in researchers, technicians, and educators.
With a mission of “Creating science and technology for the harmonious development of nature, human beings, and society," TUS has undertaken a wide range of research from basic to applied science. TUS has embraced a multidisciplinary approach to research and undertaken intensive study in some of today's most vital fields. TUS is a meritocracy where the best in science is recognized and nurtured. It is the only private university in Japan that has produced a Nobel Prize winner and the only private university in Asia to produce Nobel Prize winners within the natural sciences field.
Website: https://www.tus.ac.jp/en/mediarelations/
About Professor Shin-ichi Satake from Tokyo University of Science
Dr. Shin-ichi Satake obtained a PhD degree in Mechano-Informatics Engineering from The University of Tokyo in 1995. He currently serves as a Full Professor of the Department of Applied Electronics at the Faculty of Advanced Engineering of Tokyo University of Science. His research interests focus mainly on simulation engineering and thermal engineering, particularly computational thermal fluid dynamics. He has published over 120 peer-reviewed papers on these topics.
About Associate Professor Yoshitaka Ueki from Tokyo University of Science
Dr. Yoshitaka Ueki obtained a PhD degree in Engineering from Kyoto University in 2012. He currently serves as an Associate Professor of the Department of Applied Electronics at the Faculty of Advanced Engineering of Tokyo University of Science. His research interests focus on data processing, machine learning, and acoustic engineering.
JOURNAL
Scientific Reports
METHOD OF RESEARCH
Computational simulation/modeling
SUBJECT OF RESEARCH
Animals
ARTICLE TITLE
Gender identification of the horsehair crab, Erimacrus isenbeckii (Brandt, 1848), by image recognition with a deep neural network