Thursday, April 25, 2019

APRIL 28 DAY OF MOURNING CANADA / UN

APRIL 28 WORKERS MEMORIAL DAY USA

WOMEN ARE THE PROLETARIAT


Whose Family Values?

Women and the Social Reproduction of Capitalism

APRIL 25, 2019

Gestures and visual animations reveal cognitive origins of linguistic meaning

by New York University


Gestures and visual animations can help reveal the cognitive origins of meaning, indicating that our minds can assign a linguistic structure to new informational content "on the fly"—even if it is not linguistic in nature.


These conclusions stem from two studies, one in linguistics and the other in experimental psychology, appearing in Natural Language & Linguistic Theory and Proceedings of the National Academy of Sciences (PNAS).

"These results suggest that far less is encoded in words than was originally thought," explains Philippe Schlenker, a senior researcher at Institut Jean-Nicod within France's National Center for Scientific Research (CNRS) and a Global Distinguished Professor at New York University, who wrote the first study and co-authored the second. "Rather, our mind has a 'meaning engine' that can apply to linguistic and non-linguistic material alike.

"Taken together, these findings provide new insights into the cognitive origins of linguistic meaning."

Contemporary linguistics has established that language conveys information through a highly articulated typology of inferences. For instance, I have a dog asserts that I own a dog, but it also suggests (or "implicates") that I have no more than one: the hearer assumes that if I had two dogs, I would have said so (as I have two dogs is more informative).

Unlike asserted content, implicated content isn't targeted by negation. I don't have a dog thus means that I don't have any dog, not that I don't have exactly one dog. There are further inferential types characterized by further properties: the sentence I spoil my dog still conveys that I have a dog, but now this is neither asserted nor implicated; rather, it is "presupposed"—i.e. taken for granted in the conversation. Unlike asserted and implicated information, presuppositions are preserved in negative statements, and thus I don't spoil my dog still presupposes that I have a dog.
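To make the contrast concrete, here is a toy Python sketch (our illustration, not the authors' formalism) of the three inference types and their behaviour under negation: negation targets what is asserted, cancels what is implicated, and preserves what is presupposed.

```python
# Toy model of inference types under negation, following the dog examples.

from dataclasses import dataclass, replace

@dataclass
class Utterance:
    asserted: str      # at-issue content, targeted by negation
    implicated: str    # e.g. a scalar implicature, cancelled under negation
    presupposed: str   # backgrounded content, "projects" through negation

def negate(u: Utterance) -> Utterance:
    return replace(
        u,
        asserted=f"not ({u.asserted})",
        implicated="",                  # implicatures do not survive negation
        # presupposed is deliberately left untouched: it projects
    )

have = Utterance("I have a dog", "I have no more than one dog", "")
spoil = Utterance("I spoil my dog", "", "I have a dog")

print(negate(have))   # asserted: not (I have a dog); the implicature is gone
print(negate(spoil))  # the presupposition 'I have a dog' is still in place
```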

A fundamental question of contemporary linguistics is: Which of these inferences come from arbitrary properties of words stored in our mental dictionary and which result from general, productive processes?

In the Natural Language & Linguistic Theory work and the PNAS study, written by Lyn Tieu of Australia's Western Sydney University, Schlenker, and CNRS's Emmanuel Chemla, the authors argue that nearly all inferential types result from general, and possibly non-linguistic, processes.


Their conclusion is based on an understudied type of sentence containing gestures that replace ordinary words. For instance, in the sentence You should UNSCREW-BULB, the capitalized expression encodes a gesture of unscrewing a bulb from the ceiling. Although the hearer may be seeing this gesture for the first time (and thus cannot have it stored in a mental dictionary), it is understood thanks to its visual content.

This makes it possible to test how its informational content (i.e. unscrewing a bulb that's on the ceiling) is divided on the fly among the typology of inferences. In this case, the unscrewing action is asserted, but the presence of a bulb on the ceiling is presupposed, as shown by the fact that the negation (You shouldn't UNSCREW-BULB) preserves this information. By systematically investigating such gestures, the Natural Language & Linguistic Theory study reaches a ground-breaking conclusion: nearly all inferential types (eight in total) can be generated on the fly, suggesting that all are due to productive processes.

The PNAS study investigates four of these inferential types with experimental methods, confirming the results of the linguistic study. But it also goes one step further by replacing the gestures with visual animations embedded in written texts, thus answering two new questions: First, can the results be reproduced for visual stimuli that subjects cannot possibly have seen in a linguistic context, given that people routinely speak with gestures but not with visual animations? Second, can entirely non-linguistic material be structured by the same processes?

Both answers are positive.

In a series of experiments, approximately 100 subjects watched videos of sentences in which some words were replaced either by gestures or by visual animations. They were asked how strongly they derived various inferences that are the hallmarks of different inferential types (for instance, inferences derived in the presence of negation). The subjects' judgments displayed the characteristic signature of four classic inferential types (including presuppositions and implicated content) in gestures but also in visual animations: the informational content of these non-standard expressions was, as expected, divided on the fly by the experiments' subjects among well-established slots of the inferential typology.


More information: Lyn Tieu et al, Linguistic inferences without words, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1821018116


Provided by New York University

SAFE HEARING PPE

Agronomy Research 12(3), 895–906, 2014


Exposure to high or low frequency noise at workplaces: differences between assessment, health complaints and implementation of adequate personal protective equipment (PPE)


K. Reinhold*, S. Kalle and J. Paju

Institute of Business Administration, Tallinn University of Technology, Ehitajate tee 5, EE12618 Tallinn, Estonia

Abstract

Employees are exposed to high and low frequency noise, which may cause different health effects: hearing loss first occurs in the high frequency range, while low frequency noise usually causes sleep disturbance and annoyance. A TES 1358 sound analyzer with 1/3-octave band filters was used to measure the equivalent sound pressure level, the peak sound pressure level, and the noise frequency spectrum at different workplaces. All results were compared against Estonian and international legislation. High frequency noise was studied in the metal, electronics and wood processing industries. The results showed that the normative values were exceeded in several cases, with the highest values appearing in the range of speech frequencies. Frequency analysis indicated that the noise spectra at the work stations of various machines differed in their patterns. The low frequency spectra measured on a ship showed peaks in the frequency range of 50...1,250 Hz. Most employers provided workers with personal protective equipment against noise, but noise frequency had not been taken into consideration when selecting ear muffs, so workers within the same enterprise used identical ear muffs. Knowledge of the prevailing frequencies helps to decide which ear protection should be used to avoid damage; an adequate hearing protector device can reduce noise exposure significantly.

Key words: Noise, frequency analysis, PPE, occupational hazards.

INTRODUCTION 

Human perception of sound spans roughly 20...20,000 Hz. The ear is most receptive in the range of 500...8,000 Hz, the so-called acoustical window, even though the most sensitive range of hearing is 1,000...4,000 Hz (Salvendy, 2012), and the spectrum of human speech lies in the frequency region of 250...6,300 Hz (Cox & Moore, 1988). Health effects of noise exposure have been studied by many researchers, and differences in complaints between low frequency (20...500 Hz) (Alves-Pereira & Castelo Branco, 2007) and high frequency noise have been reported in several sources. It has also been indicated that hearing loss tends to occur first in the high frequency range (Salvendy, 2012). Industrial noise is mainly characterized by high frequency content, but a considerable number of workers are also exposed to low frequency noise on a daily basis. There is general agreement that progressive hearing loss at frequencies of 500, 1,000, 2,000, and 3,000 Hz will eventually result in impaired hearing, i.e. the inability to hear and understand speech (Johnson et al., 2001).
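As a rough illustration of the central quantity involved (not the paper's own method), the sketch below computes the equivalent continuous sound pressure level, Leq = 10 log10(mean(p^2)/p0^2) with p0 = 20 µPa, plus a crude dominant-frequency check; the synthetic 1 kHz tone and all parameters are invented stand-ins for a calibrated workplace recording.

```python
# Equivalent continuous sound pressure level and a crude frequency check.

import numpy as np

P0 = 20e-6  # reference pressure, 20 micropascals

def leq_db(pressure: np.ndarray) -> float:
    """Equivalent continuous sound pressure level in dB re 20 uPa."""
    return 10.0 * np.log10(np.mean(pressure**2) / P0**2)

def dominant_frequency(pressure: np.ndarray, fs: float) -> float:
    """Frequency bin carrying the most energy (a stand-in for
    1/3-octave band analysis)."""
    spectrum = np.abs(np.fft.rfft(pressure))
    freqs = np.fft.rfftfreq(len(pressure), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 8_000.0
t = np.arange(0, 1.0, 1.0 / fs)
p = 0.2 * np.sin(2 * np.pi * 1_000 * t)    # 1 kHz tone, ~0.2 Pa amplitude

print(f"Leq = {leq_db(p):.1f} dB")                       # ~77 dB
print(f"dominant = {dominant_frequency(p, fs):.0f} Hz")  # 1000 Hz
```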



CANADIAN GUIDELINES FOR CONCUSSION




APRIL 25, 2019
Sound of the sea solves decades-old supervolcano mystery

by University of Aberdeen
Solfatara is a shallow volcanic crater located at the centre of Campi Flegrei, where volcanic material is emitted through steaming vents. Credit: University of Aberdeen

Scientists have used the sound of the sea to discover the route taken by hot fluids that feed a supervolcano in southern Italy.

Using an innovative technique based on the 'hum', or seismic noise, of waves crashing at the coastline of Campi Flegrei, scientists have produced a seismic image of the deeper structure of the volcano that reveals the main route bringing hot fluids to the surface.
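The core idea behind ambient-noise imaging can be sketched in a few lines of Python (a schematic of the general technique, not the authors' processing chain): cross-correlating long records of background noise at two stations recovers the travel time of waves between them. The sampling rate, delay and noise level below are invented values.

```python
# Cross-correlation of ambient noise between two seismic stations.

import numpy as np

rng = np.random.default_rng(0)
fs = 50.0                 # sampling rate in Hz (assumed)
n = 5_000                 # samples of continuous noise
true_delay = 120          # propagation delay between stations, in samples

source = rng.standard_normal(n)                   # diffuse noise field
station_a = source
station_b = np.roll(source, true_delay) + 0.5 * rng.standard_normal(n)

# the peak of the cross-correlation sits at the inter-station travel time
xcorr = np.correlate(station_b, station_a, mode="full")
lags = np.arange(-n + 1, n)
estimated = lags[np.argmax(xcorr)]

print(f"estimated delay: {estimated / fs:.2f} s "
      f"(true: {true_delay / fs:.2f} s)")
```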

Their research has featured in a documentary—'The Next Pompeii'—on Nova, a popular science series on major US broadcaster PBS. The documentary highlights the innovative scientific techniques being used to monitor Campi Flegrei – a volcanic caldera to the west of Naples that last erupted five centuries ago.

The area has been relatively quiet since the 1980s, when the injection of volcanic material into the shallower structure of the volcano caused thousands of small earthquakes; 38 years of relative seismic silence have followed.

Seismic imaging is one of the main methods scientists use to map a volcano's structure at depth accurately; however, the low level of seismic activity in the area over nearly four decades has meant that Campi Flegrei's inner structure has remained a mystery – until now.

The so-called 'feeder pathway' discovered by the scientists is believed to have formed during the last period of seismic activity in the 1980s; it brings volcanic material up from the depths of the volcano, which lie out at sea.

The material then travels up and along established routes beneath the volcano towards fumaroles at Solfatara and Pisciarelli – located approximately in the centre of the caldera – where it is expelled as vapour through steaming vents.

Seismologists Professor Luca De Siena, Dr. Carmelo Sammarco and Dr. David Cornwell led the study from the School of Geosciences at the University of Aberdeen. They worked alongside the Vesuvius Observatory, which advises the Italian Government's Department of Civil Protection on the threat posed by volcanic activity in the region.

Professor De Siena, now at the University of Mainz, said: "By using the noise at the seashore to create a seismic image, we finally have a better idea of how volcanic material travels from the depths of the volcano to the surface."

"This is the first time this relatively new technique has been used in a heavily populated area, and it shows us that the feeder pathway created at the beginning of the 1980s appears fully functional in 2011-2013, when we collected the data.

"This is important as it improves our understanding of the character of the volcano, which may ultimately improve monitoring and early warning procedures in an area inhabited by millions of people."


More information: L. De Siena et al. Ambient Seismic Noise Image of the Structurally Controlled Heat and Fluid Feeder Pathway at Campi Flegrei Caldera, Geophysical Research Letters (2018). DOI: 10.1029/2018GL078817


Provided by University of Aberdeen


APRIL 17, 2019
Artificial intelligence speeds efforts to develop clean, virtually limitless fusion energy

by John Greenwald, Princeton Plasma Physics Laboratory
Depiction of fusion research on a doughnut-shaped tokamak enhanced by artificial intelligence. Credit: Eliot Feibush/PPPL and Julian Kates-Harbeck/Harvard University

Artificial intelligence (AI), a branch of computer science that is transforming scientific inquiry and industry, could now speed the development of safe, clean and virtually limitless fusion energy for generating electricity. A major step in this direction is under way at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University, where a team of scientists working with a Harvard graduate student is for the first time applying deep learning—a powerful new version of the machine learning form of AI—to forecast sudden disruptions that can halt fusion reactions and damage the doughnut-shaped tokamaks that house the reactions.

Promising new chapter in fusion research

"This research opens a promising new chapter in the effort to bring unlimited energy to Earth," Steve Cowley, director of PPPL, said of the findings, which are reported in the current issue of Naturemagazine. "Artificial intelligence is exploding across the sciences and now it's beginning to contribute to the worldwide quest for fusion power."

Fusion, which drives the sun and stars, is the fusing of light elements in the form of plasma—the hot, charged state of matter composed of free electrons and atomic nuclei—that generates energy. Scientists are seeking to replicate fusion on Earth for an abundant supply of power for the production of electricity.

Crucial to demonstrating the ability of deep learning to forecast disruptions—the sudden loss of confinement of plasma particles and energy—has been access to huge databases provided by two major fusion facilities: the DIII-D National Fusion Facility that General Atomics operates for the DOE in California, the largest facility in the United States, and the Joint European Torus (JET) in the United Kingdom, the largest facility in the world, which is managed by EUROfusion, the European Consortium for the Development of Fusion Energy. Support from scientists at JET and DIII-D has been essential for this work.

The vast databases have enabled reliable predictions of disruptions on tokamaks other than those on which the system was trained—in this case from the smaller DIII-D to the larger JET. The achievement bodes well for the prediction of disruptions on ITER, a far larger and more powerful tokamak that will have to apply capabilities learned on today's fusion facilities.

The deep learning code, called the Fusion Recurrent Neural Network (FRNN), also opens possible pathways for controlling as well as predicting disruptions.

Most intriguing area of scientific growth

"Artificial intelligence is the most intriguing area of scientific growth right now, and to marry it to fusion science is very exciting," said Bill Tang, a principal research physicist at PPPL, coauthor of the paper and lecturer with the rank and title of professor in the Princeton University Department of Astrophysical Sciences who supervises the AI project. "We've accelerated the ability to predict with high accuracy the most dangerous challenge to clean fusion energy."


Unlike traditional software, which carries out prescribed instructions, deep learning learns from its mistakes. Accomplishing this seeming magic are neural networks, layers of interconnected nodes—mathematical algorithms—that are "parameterized," or weighted by the program to shape the desired output. For any given input the nodes seek to produce a specified output, such as correct identification of a face or accurate forecasts of a disruption. Training kicks in when a node fails to achieve this task: the weights automatically adjust themselves for fresh data until the correct output is obtained.
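The weight-adjustment loop just described can be illustrated with a minimal sketch: a single linear "node" whose parameters are repeatedly nudged against its error on toy data. The learning rate and data are invented for illustration.

```python
# One "node" learning by gradient descent on toy data.

import numpy as np

rng = np.random.default_rng(1)
w, b = rng.standard_normal(), 0.0   # the node's adjustable weights
lr = 0.1                            # learning rate

x = rng.standard_normal(200)        # toy inputs
y = 2.0 * x + 1.0                   # targets the node should learn

for _ in range(500):
    pred = w * x + b                # the node's current output
    err = pred - y                  # failure to achieve the task...
    w -= lr * np.mean(err * x)      # ...triggers a weight adjustment
    b -= lr * np.mean(err)          # (gradient of the squared error)

print(f"learned w = {w:.2f}, b = {b:.2f}")   # approaches w = 2, b = 1
```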

A key feature of deep learning is its ability to capture high-dimensional rather than one-dimensional data. For example, while non-deep learning software might consider the temperature of a plasma at a single point in time, the FRNN considers profiles of the temperature developing in time and space. "The ability of deep learning methods to learn from such complex data make them an ideal candidate for the task of disruption prediction," said collaborator Julian Kates-Harbeck, a physics graduate student at Harvard University and a DOE-Office of Science Computational Science Graduate Fellow who was lead author of the Nature paper and chief architect of the code.
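As an illustration of the kind of architecture this describes, here is a toy recurrent classifier in Python/PyTorch; the layer sizes and structure are our assumptions, not the published FRNN model. It reads a time sequence of 1-D plasma profiles and emits a disruption score at every time step.

```python
# Toy recurrent network over time-evolving plasma profiles.

import torch
import torch.nn as nn

class ToyDisruptionRNN(nn.Module):
    def __init__(self, profile_bins: int = 32, hidden: int = 64):
        super().__init__()
        # compress each spatial profile (e.g. temperature vs. radius)
        self.encoder = nn.Linear(profile_bins, hidden)
        # track how the profiles evolve in time
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        # per-time-step "disruption coming" score in [0, 1]
        self.head = nn.Linear(hidden, 1)

    def forward(self, profiles: torch.Tensor) -> torch.Tensor:
        # profiles: (batch, time_steps, profile_bins)
        h = torch.relu(self.encoder(profiles))
        out, _ = self.rnn(h)
        return torch.sigmoid(self.head(out)).squeeze(-1)

model = ToyDisruptionRNN()
shot = torch.randn(1, 100, 32)   # one synthetic 100-step "shot"
print(model(shot).shape)         # torch.Size([1, 100])
```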

Training and running neural networks relies on graphics processing units (GPUs), computer chips first designed to render 3-D images. Such chips are ideally suited for running deep learning applications and are widely used by companies to produce AI capabilities such as understanding spoken language and enabling self-driving cars to observe road conditions.

Kates-Harbeck trained the FRNN code on more than two terabytes (10^12 bytes) of data collected from JET and DIII-D. After running the software on Princeton University's Tiger cluster of modern GPUs, the team placed it on Titan, a supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility, and other high-performance machines.

A demanding task

Distributing the network across many computers was a demanding task. "Training deep neural networks is a computationally intensive problem that requires the engagement of high-performance computing clusters," said Alexey Svyatkovskiy, a coauthor of the Nature paper who helped convert the algorithms into a production code and now is at Microsoft. "We put a copy of our entire neural network across many processors to achieve highly efficient parallel processing," he said.
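A bare-bones illustration of the data parallelism Svyatkovskiy describes: every "processor" holds a copy of the model, computes a gradient on its own shard of the data, and the gradients are averaged before each update. Real systems use MPI or similar frameworks; this sketch only shows the arithmetic, on invented toy data.

```python
# Data-parallel training: averaged gradients from several model copies.

import numpy as np

def shard_gradient(w, x, y):
    """Gradient of the mean squared error on one worker's data shard."""
    return 2.0 * np.mean((w * x - y) * x)

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000)
y = 3.0 * x                                   # toy target: w = 3
shards = np.array_split(np.arange(1_000), 4)  # four "processors"

w, lr = 0.0, 0.1
for _ in range(100):
    grads = [shard_gradient(w, x[s], y[s]) for s in shards]
    w -= lr * np.mean(grads)                  # synchronized, averaged update

print(f"w = {w:.3f}")                         # converges to 3.0
```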

The software further demonstrated its ability to predict true disruptions within the 30-millisecond time frame that ITER will require, while reducing the number of false alarms. The code now is closing in on the ITER requirement of 95 percent correct predictions with fewer than 3 percent false alarms. While the researchers say that only live experimental operation can demonstrate the merits of any predictive method, their paper notes that the large archival databases used in the predictions "cover a wide range of operational scenarios and thus provide significant evidence as to the relative strengths of the methods considered in this paper."
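How a predictor might be scored against a lead-time requirement like ITER's 30 ms can be sketched as follows (our illustration, not the paper's evaluation code): an alarm counts as a true positive only if it fires at least `lead` seconds before the disruption. The threshold and shot data are invented.

```python
# Scoring disruption alarms against a minimum warning lead time.

def evaluate(shots, threshold=0.5, lead=0.030):
    tp = fp = fn = 0
    for times, scores, t_disrupt in shots:
        alarms = [t for t, s in zip(times, scores) if s >= threshold]
        if t_disrupt is None:                        # non-disruptive shot
            fp += 1 if alarms else 0                 # any alarm is false
        elif any(t <= t_disrupt - lead for t in alarms):
            tp += 1                                  # warned in time
        else:
            fn += 1                                  # missed or too late
    return tp, fp, fn

shots = [
    ([0.00, 0.10, 0.15], [0.1, 0.7, 0.9], 0.15),  # warned 50 ms early
    ([0.00, 0.10, 0.15], [0.1, 0.2, 0.3], None),  # clean shot, no alarm
]
print(evaluate(shots))  # (1, 0, 0)
```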

From prediction to control

The next step will be to move from prediction to the control of disruptions. "Rather than predicting disruptions at the last moment and then mitigating them, we would ideally use future deep learning models to gently steer the plasma away from regions of instability with the goal of avoiding most disruptions in the first place," Kates-Harbeck said. Highlighting this next step is Michael Zarnstorff, who recently moved from deputy director for research at PPPL to chief science officer for the laboratory. "Control will be essential for post-ITER tokamaks—in which disruption avoidance will be an essential requirement," Zarnstorff noted.

Progressing from AI-enabled accurate predictions to realistic plasma control will require more than one discipline. "We will combine deep learning with basic, first-principle physics on high-performance computers to zero in on realistic control mechanisms in burning plasmas," said Tang. "By control, one means knowing which 'knobs to turn' on a tokamak to change conditions to prevent disruptions. That's in our sights and it's where we are heading."


APRIL 25, 2019 REPORT

China's efforts to reduce air pollution in major cities found to increase pollution in nearby areas

by Bob Yirka, Phys.org

A team of researchers affiliated with institutions in China, the Netherlands, the Czech Republic, the U.S. and Austria has found that efforts by the Chinese government to reduce air pollution in its major cities have resulted in higher air pollution levels in nearby areas. The group has published a paper describing its findings in the journal Science Advances.


Over the past several decades, China has become a major manufacturing powerhouse, but in doing so has put the health of its urban citizens at risk from severe air pollution, which comes mainly from factory smokestacks. The problem was highlighted back in 2008, when viewers of the Beijing Olympics saw dense clouds of pollution blanketing major parts of the city. Since then, the Chinese government has instituted policies and rules governing the amount of pollutants a company may emit, and the results have seemed promising: pollution levels have diminished. But the problem appears to have been shifted rather than solved, the researchers report. They collected and tested air samples from a large number of sites just outside the big metropolitan areas and found large increases in air pollution levels.

The researchers note that many of the rules surrounding pollution limits are localized in China. This means that companies that find themselves emitting over the limit can simply move to a nearby area that falls under a different, less strict, jurisdiction. They note also that quite often those people in charge of making rules about air pollution outside of the metropolitan areas are much laxer about pollutants because they hope to attract companies that will employ people who live there.

In testing the air in areas some distance from cities such as Beijing, the researchers found that, on average, the increase in particulate matter was 1.6 times larger than the reductions seen in the cities, indicating that the country as a whole is actually producing more of it than ever. They also found that the lax rules outside the metropolitan areas led to overall emission levels 3.6 times higher than before the new urban rules were put in place, and that overall water consumption was 2.9 times higher as well. They also discovered that the winds occasionally shifted, pushing pollution from the new manufacturing areas back over the cities and covering them once again with dense clouds of pollutants.

Japan creates first artificial crater on asteroid

Japan's Hayabusa2 mission aims to shed light on how the solar system evolved.
Japanese scientists have succeeded in creating what they called the first-ever artificial crater on an asteroid, a step towards shedding light on how the solar system evolved, the country's space agency said Thursday. The announcement came after the Hayabusa2 probe fired an explosive device at the Ryugu asteroid early this month to blast a crater in its surface and scoop up material, aiming to reveal more about the origins of life on Earth.
Yuichi Tsuda, Hayabusa2 project manager at the Japanese space agency (JAXA), told reporters the team had confirmed the crater from images captured by the probe 1,700 metres (5,500 feet) from the asteroid's surface.
"Creating an artificial crater with an impactor and observing it in detail afterwards is a world-first attempt," Tsuda said.
"This is a big success."
NASA's Deep Impact probe succeeded in creating an artificial crater on a comet in 2005, but only for observation purposes.
Masahiko Arakawa, a Kobe University professor involved in the project, said it was "the best day of his life".
"We can see such a big hole a lot more clearly than expected," he said, adding the images showed a crater 10 metres in diameter.
JAXA scientists had previously predicted that the crater could be as large as 10 metres in diameter if the surface was sandy, or three metres if rocky.
"The surface is filled with boulders but yet we created a crater this big. This could mean there's a scientific mechanism we don't know or something special about Ryugu's materials," the professor said.
The aim of blasting the crater on Ryugu is to throw up "fresh" material from under the asteroid's surface that could shed light on the early stages of the solar system.
The asteroid is thought to contain relatively large amounts of organic matter and water from some 4.6 billion years ago when the solar system was born.
In February, Hayabusa2 touched down briefly on Ryugu and fired a bullet into the surface to puff up dust for collection, before blasting back to its holding position.
The mission, with a price tag of around 30 billion yen ($270 million), was launched in December 2014 and is scheduled to return to Earth with its samples in 2020.
Photos of Ryugu—which means "Dragon Palace" in Japanese and refers to a castle at the bottom of the ocean in an ancient Japanese tale—show the asteroid has a rough surface full of boulders.

© 2019 AFP