Tuesday, February 27, 2024

Boeing review finds 'disconnect' on safety


A new report for the US government has raised serious concerns about Boeing's safety management systems, adding to the scrutiny facing the US plane maker.

The review found a "disconnect" between senior management and regular staff, and signs that safety-related messages and behaviours were not effectively implemented across the company.

The report was ordered after crashes involving Boeing planes in 2018 and 2019.

Boeing pledged to review the findings.

"We've taken important steps to foster a safety culture that empowers and encourages all employees to share their voice. But there is more work to do," the company said.

"We will carefully review the panel's assessment and learn from their findings, as we continue our comprehensive efforts to improve our safety and quality programs."

The company, one of two major global plane makers, has been under added pressure since last month, when a section of one of its passenger jets blew off in mid-air, forcing an emergency landing.

The incident, which narrowly avoided serious harm, revived questions about Boeing's manufacturing processes, years after the 2018 and 2019 accidents, which killed 346 people and led to accusations that the company had put profits before safety as it produced its planes.

The panel of experts, which was convened after the earlier crashes, said Boeing had taken steps to improve, but that it saw indications of "gaps in Boeing's safety journey".

It said some Boeing staff were hesitant to report problems and worried about retaliation because of how the reporting process was set up.

Boeing also did not have a clear system for reporting problems and tracking how those concerns were resolved, it said.

The Federal Aviation Administration (FAA) said it would also review the findings.

The agency is currently investigating Boeing's manufacturing processes, triggered by the 5 January blowout. It has barred the company from expanding production of its popular 737 Max planes while the review is under way.

"We will continue to hold Boeing to the highest standard of safety and will work to ensure the company comprehensively addresses these recommendations," the FAA said as it released the report.

The troubles at Boeing are expected to lead to delays in delivering new planes to airlines, which Ryanair has said could cause ticket prices to rise. Other airlines have also voiced frustration over the issues.

Earlier this month, Boeing said it was replacing the person in charge of the 737 Max programme and creating a new position of senior vice president for quality.
Draft Canada law would force social media companies to quickly remove harmful content

By David Ljunggren, February 26, 2024

Canada's Minister of Justice and Attorney General of Canada Arif Virani speaks about the Online Harms Act during a press conference on Parliament Hill in Ottawa, Ontario, Canada February 26, 2024. 
REUTERS/Blair Gable 

OTTAWA, Feb 26 (Reuters) - Canada on Monday unveiled draft legislation to combat online hate that would force major companies to quickly remove harmful content and boost the penalty for inciting genocide to life in prison.

The Liberal government of Prime Minister Justin Trudeau introduced the bill with the stated aim of protecting children from online predators.

The bill says major social media companies must quickly remove content that sexually victimizes a child as well as intimate content communicated without consent. In both cases, the content would have to be removed within 24 hours, subject to an oversight and review process.

A company found guilty of contravening the law could be fined a maximum of 6% of its gross global revenues, government officials said during a technical briefing.

"There must be consequences for those who violate the rules online ... bad actors target our most vulnerable - our children. They spread vile hate and encourage impressionable people to commit violence," Justice Minister Arif Virani told reporters.

Content providers would have to introduce special protections for children, including parental controls, safe search settings and content warning labels.

The bill covers social media, user-uploaded adult content and live-streaming services but not private and encrypted messaging services.


The bill would also sharply raise the penalties for those found guilty of advocating or promoting genocide. The proposed maximum sentence would be life in prison, up from five years at present.

Whether all the provisions make it through to the final version is unclear. The bill must be studied by a parliamentary committee and then the upper Senate chamber, both of which can demand changes.

Other nations are moving to shield children from danger on the internet. Last October, Britain's new Online Safety Act set tougher standards for social media platforms.

Canadian government ties with major internet companies are strained over Ottawa's demand that they pay Canadian news publishers for their content.
Alphabet's Google agreed last November to pay C$100 million ($74.05 million) annually to publishers while Meta decided to block news on Facebook and Instagram in Canada.

A Meta spokesperson said the company looks forward to collaborating with lawmakers and industry peers "on our long-standing priority to keep Canadians safe."

A spokeswoman for Google said the company was unlikely to respond on Monday.

($1 = 1.3505 Canadian dollars)


Reporting by David Ljunggren; Editing by Andrea Ricci, Sandra Maler and Richard Chang
British report confirms that air strikes on Yemen are ineffective & counterproductive

[26/February/2024]


LONDON, February 26, 2024 (Saba) - The British website "UnHerd" confirmed that "the US-UK attacks on Yemen were counterproductive," and were "provoking a hornet's nest."

The site said in a statistical report published on Monday that the US-British attacks, which began on January 12, raised the rate of Yemeni attacks in the Red Sea from 0.38 per day before that date, to 0.53 after.
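Taken at face value, the figures quoted above imply an increase of roughly 39 per cent in daily attacks; a quick sketch (using only the rates cited in the report) makes the arithmetic explicit:

```python
# Illustrative check of the rates quoted in the UnHerd report:
rate_before = 0.38  # average Yemeni attacks per day before 12 January
rate_after = 0.53   # average attacks per day after the strikes began

pct_increase = (rate_after - rate_before) / rate_before * 100
print(f"Relative increase: {pct_increase:.0f}%")  # prints "Relative increase: 39%"
```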

In addition to their lack of effect, the attacks provide U.S. adversaries, primarily China and Iran, with intelligence on Western naval defense systems that could be used in any future conflict, raising serious questions about the wisdom of military action.

The site asserted that the Yemenis have already achieved their goal of imposing an effective naval blockade in the region. It attributed Western leaders' continued strikes, despite their opposite effect, to what it called the principle of "do something."

The "do something" principle results from a weak leadership class feeling the need to act when an enemy or competitor engages in provocation, even if such actions would be counterproductive. Weak leaders are unable to make difficult decisions based on evidence and logic, and instead attack — albeit ineffectively — so that it seems as if they are addressing the problem.

The publication of the report comes in conjunction with the launch of five new air raids by US-UK warplanes today on the Ras Issa area of Al-Salif district, northwest of Hodeidah city.



Israel's strikes in Lebanon fit a pattern 'of a slow and steady escalation'

Issued on: 26/02/2024
Video by Andrew Hilliar, France 24

Israel's strikes in Lebanon fit a pattern "of a slow and steady escalation", said Andrew Hilliar, FRANCE24's correspondent in Jerusalem. It began in October, with "tit for tat skirmishes between Hezbollah militants and the Israeli Defence Forces (IDF), and since then, there has been a steady increase of military and civilian deaths on both sides", said Hilliar.

Blinken meets Iraqi Kurdistan PM in Washington amid regional tension

Meeting comes after talks between Baghdad and Washington over possibility of pulling US anti-ISIS troops out of country


US Secretary of State Antony Blinken, left, meets Masrour Barzani, Prime Minister of Iraq's Kurdistan Regional Government, in Washington. AP



The National
Feb 26, 2024

Secretary of State Antony Blinken met Kurdistan Regional Government Prime Minister Masrour Barzani in Washington on Monday amid rising tension in Iraq over calls for the removal of US anti-ISIS coalition troops.

“The United States has a long partnership with the … Kurdistan Regional Government,” Mr Blinken said before the meeting.

“And it’s a partnership that is cemented first and foremost in shared values, shared interests and also a shared history of sacrifice together.”

The meeting comes after recent talks between Baghdad and Washington over the possibility of pulling US troops out of the country.

A US-led coalition has been in Iraq since ISIS swept through the country in 2014.

The group was defeated in 2017, but about 2,500 American troops remain in Iraq in an advise-and-assist capacity.

The Iraqi government has been under increasing internal pressure over the presence of US troops.

“We are very proud to say that we are American allies,” Mr Barzani said.

“We have been through some very difficult times and we are very thankful and we express our gratitude for the support that the US has always given to our people.

"And now we are having some new challenges in the region.”

Mr Barzani has voiced concern over moves in the Iraqi Parliament to force US troops to leave the country, as he believes ISIS is still a significant threat.

“All Iraqi components must realise that the threat of terrorism and its reappearance remains valid,” he told the US charge d'affaires David Burger last week.

Mr Barzani said the “interests, stability and security of all Iraqi regions and components must be taken into consideration”.

Baghdad and Washington have held at least two rounds of talks since last month on ending the US-led international coalition to fight ISIS.

Mr Barzani's visit also comes after US strikes on Iraqi territory in response to attacks by Iran-backed militia groups on American troops.

After an attack on a base in Jordan that killed three American soldiers, the US hit Iran-backed sites in Iraq and Syria on February 2.

Five days later, US forces killed a Kataib Hezbollah commander in Iraq.

Attacks on US troops in the Middle East have risen sharply since the start of the Israel-Gaza war in October.


Artificial Intelligence (AI): Friend or Foe?

In the realm of AI’s vast benefits, there looms a shadow of vulnerability to malicious exploitation. To explore this contrast further, one must acknowledge that with great power comes great responsibility. Just as fire can be a beacon of warmth and progress or a destructive force when misused, AI possesses the same duality.

February 26, 2024




This Op-Ed is one in a series aimed at shedding light on critical global issues that demand urgent attention. The series addresses a spectrum of challenges affecting us all, emphasizing the need for collective action and support from international humanitarian organizations. By fostering awareness and encouraging collaboration, we hope to inspire positive change and contribute to a more compassionate and equitable world as we cover the multitude of issues that impact our global community.

The rapid advancement of Artificial Intelligence (AI) represents a double-edged sword. It offers vast opportunities for progress across numerous fields while simultaneously raising concerns about its potential misuse. AI’s ability to learn, reason, and make decisions promises to revolutionize various fields like healthcare, transportation, education, and beyond, enhancing problem-solving and efficiency. 

However, like most game-changing discoveries, there's a looming shadow cast by the possibility of its exploitation by bad actors for malicious intent. The fear stems from scenarios where AI could be weaponized for cyberattacks, surveillance, or even autonomous weaponry, posing significant threats to privacy, security, and societal stability, and even raising existential risks.

AI reshapes sectors, offers vast benefits

AI and machine learning were pivotal in combating COVID-19, aiding in scaling communications, tracking spread, and accelerating research and treatment efforts. For instance, Clevy.io, a French start-up and Amazon Web Services customer, launched a chatbot to screen COVID-19 symptoms and offer official information to the public. Utilizing real-time data from the French government and the WHO, the chatbot managed millions of inquiries, covering topics ranging from symptoms to governmental policies.

About 83 percent of executives recognize the capability of science and technology in tackling global health issues, signaling a growing inclination towards AI-powered healthcare. In a groundbreaking development, University College London has paved the way for brain surgery assisted by artificial intelligence, potentially revolutionizing the field within the next two years. Recognized by the government as a significant advancement, it holds the promise of transforming healthcare in the UK.

Furthermore, in December 2023, Google introduced MedLM, a set of AI models tailored for healthcare tasks like aiding clinicians in studies and summarizing doctor-patient interactions. It’s now accessible to eligible Google Cloud users in the United States.

Artificial intelligence (AI) is also reshaping various sectors like banking, insurance, law enforcement, transportation, and education. It detects fraud, streamlines procedures, aids investigations, enables autonomous vehicles, deciphers ancient languages, and improves teaching and learning techniques. Moreover, AI supports everyday tasks, saving time and reducing mental strain.


Experts taken aback by quick progress of AI

As technology improves, it’s becoming more likely that AI will be able to do many of the tasks currently done by professionals such as lawyers, accountants, teachers, programmers, and journalists. This means that these jobs might change or even become automated in the future. According to a report by investment bank Goldman Sachs, AI could replace 300 million jobs globally and increase the annual value of goods and services by seven percent. It also suggests that up to a quarter of tasks in the US and Europe could be automated by AI. This projection underscores the transformative impact AI could have on the labor market.

Many experts have been taken aback by the quick progress of AI development. Some prominent figures, including Elon Musk and Apple co-founder Steve Wozniak, were among 1,300 signatories of an open letter calling for a six-month pause on training AI systems to address the dangers arising from their rapid advancement. Furthermore, in an October report, the UK government warned that AI could aid hackers in cyberattacks and potentially assist terrorists in planning biological or chemical attacks. Experts, including the leaders of OpenAI and Google DeepMind, have warned that artificial intelligence could potentially result in humanity's extinction. Dozens have supported a statement published on the webpage of the Centre for AI Safety: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Artificial intelligence, elections, deep fakes, and AI bombs

In addition, the swift evolution of artificial intelligence may be disrupting democratic processes like elections. Generative AI, capable of creating convincing yet fake content, particularly deepfake videos, blurs the line between fact and fiction in politics. This technological capability poses significant risks to the integrity of democratic systems worldwide. Gary Marcus, a professor at New York University, warns: "The biggest immediate risk is the threat to democracy…there are a lot of elections around the world in 2024, and the chance that none of them will be swung by deep fakes and things like that is almost zero."

The CEO of OpenAI, Sam Altman, emphasized the importance of addressing “very subtle societal misalignments” that could lead AI systems to cause significant harm, rather than focusing solely on scenarios like “killer robots walking on the street.” He also referenced the International Atomic Energy Agency (IAEA) as a model for international cooperation in overseeing potentially dangerous technologies like nuclear power.

A survey from Stanford University states that more than one-third of researchers believe artificial intelligence (AI) could cause a “nuclear-level catastrophe”. This highlights the widespread concerns within the field about the dangers posed by AI technology advancing so rapidly. The results of the survey contribute to the increasing demand for regulations on artificial intelligence. These calls have been sparked by various controversies, like incidents where chatbots were linked to suicides and the creation of deepfake videos showing Ukrainian President Volodymyr Zelenskyy supposedly surrendering to Russian forces.

In “AI and the Bomb,” published this year, James Johnson of the University of Aberdeen envisions a 2025 accidental nuclear war in the East China Sea, triggered by AI-powered intelligence from both the U.S. and Chinese sides.

The proliferation of autonomous weapons

Among the worst things AI can do, and is already doing, is integrating into military systems, notably autonomous weapons, raising ethical, legal, and security concerns. These "killer robots" risk unintended consequences, loss of human control, misidentification, and targeting errors, potentially escalating conflicts.

In November 2020, the prominent Iranian nuclear scientist Mohsen Fakhrizadeh was killed in an attack involving a remote-controlled machine gun believed to be used by Israel. Reports suggest that the weapon utilized artificial intelligence to target and carry out the assassination.

The proliferation of autonomous weapons could destabilize global security dynamics and trigger an arms race due to the absence of clear regulations and international norms governing their use in warfare. The lack of a shared framework poses risks, evident in conflicts like the Ukraine war and Gaza. The Ukraine frontline has witnessed a surge in unmanned aerial vehicles equipped with AI-powered targeting systems, enabling near-instantaneous destruction of military assets. In Gaza, AI reshaped warfare after Hamas disrupted Israeli surveillance. Israel responded with "the Gospel," an AI targeting platform, increasing target strikes but raising civilian concerns.

Countries worldwide are urging urgent regulation of AI due to risks and concerns over its serious consequences. Therefore, concerted efforts are needed to establish clear guidelines and standards to ensure responsible and ethical deployment of AI systems.

Critical safety measures for artificial intelligence: we cannot solve problems in silos

In November, top AI developers, meeting at Britain’s inaugural global AI Safety Summit, pledged to collaborate with governments in testing emerging AI models before deployment to mitigate risks. Additionally, the U.S., Britain, and over a dozen other nations introduced a non-binding agreement outlining broad suggestions for AI safety, including monitoring for misuse, safeguarding data integrity, and vetting software providers.

The US plans to launch an AI safety institute to assess risks, while President Biden's executive order requires developers of risky AI systems to share safety test results. In the EU, lawmakers ratified a provisional agreement on AI rules, paving the way for the world's first comprehensive AI legislation. EU countries have also endorsed the AI Act, which will regulate government use of AI in surveillance and set requirements for AI systems.

AI regulation is crucial for ensuring the ethical and responsible development, deployment, and use of artificial intelligence technologies. Without regulation, there is a risk of AI systems being developed and utilized in ways that harm individuals, society, and the environment.

However, confronting the dark and nefarious challenges that come with AI country by country will be as futile as our attempts to overcome the existential, game-changing COVID-19 virus and the accompanying global pandemic of 2020-2023. There is a plethora of global issues, such as climate change, world hunger, child labour, child marriages, waste, immigration, refugee crises, and terrorism, to name a few, which we will fail to solve if we continue trying to address them in nation-by-nation silos.



Britain's Royal Mint honours George Michael with collectible coins
By Euronews with agencies
Published on 26/02/2024

The coins feature Michael wearing his trademark aviator sunglasses and cross-shaped earrings seen in the video for his 1987 hit single, Faith.

One of the world's best-selling artists of all time, the late British singer-songwriter George Michael is being honoured with three collectible coins.

Produced by the Royal Mint as part of its Music Legends series, they feature Michael wearing his trademark aviator sunglasses and cross-shaped earrings worn in the video for his 1987 hit single, Faith.

Officially approved by his estate, it is the latest in a series of releases recognising stars including David Bowie, Elton John and the band Queen.

In a statement, George Michael Entertainment said that on behalf of the star, they were deeply honoured by the Royal Mint’s tribute to him.

“He would have been enormously proud and genuinely touched that a national institution should have decided to pay tribute to his memory in this way," it said.


Poorly planned tree planting schemes threatening ecosystems

A new study says that poorly planned tree-planting schemes in Africa are threatening vital grassland ecosystems

Savannah or forest? That depends upon whom you ask. Image: Stuart Butler/Geographical

26 February 2024

Restoring 100 million hectares of degraded and deforested land in Africa by 2030 is a highly ambitious target. There's no doubt that the goals of The African Forest Landscape Restoration Initiative are lofty and much-needed. But are this and other reforestation projects being done in the right way?

A new study published this month in the journal Science casts doubt on this, claiming that an area of Africa the size of France is under threat from inappropriate forest restoration projects. The study authors claim that 52 per cent of tree-planting projects in Africa are taking place in natural savannah ecosystems rather than degraded woodland, and that this risks destroying these vital grasslands.

One of the main reasons this is happening is the definition of forest land used by the UN Food and Agriculture Organization, which defines forests as areas of land spanning more than 0.5 hectares with trees higher than five metres and a canopy cover of at least 10 per cent. Many African savannah ecosystems fall squarely within this definition. By planting trees in predominantly grassland ecosystems, the entire habitat is changed, which will negatively impact many of the species of plants and animals already living there. To make matters worse, the study authors go on to say that almost 60 per cent of the trees being planted are non-native species, which runs the serious risk of introducing invasive species.
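As an illustration only (the function and parameter names below are our own, not from the study or the FAO), the three thresholds in that definition can be read as a simple rule, which shows how a wooded savannah plot easily ends up classified as forest:

```python
# Hypothetical sketch of the FAO forest definition described above: land
# counts as "forest" if it spans more than 0.5 ha, has trees taller than
# 5 m, and has at least 10% canopy cover.
def is_fao_forest(area_ha: float, tree_height_m: float, canopy_cover_pct: float) -> bool:
    return area_ha > 0.5 and tree_height_m > 5.0 and canopy_cover_pct >= 10.0

# A wooded savannah plot with scattered tall trees and ~15% canopy cover
# satisfies all three thresholds, so it is classified as forest.
print(is_fao_forest(area_ha=100.0, tree_height_m=8.0, canopy_cover_pct=15.0))  # True
```

The rule says nothing about whether grasses, not trees, dominate the ecosystem, which is exactly the gap the study authors highlight.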



The study focused its attention on The African Forest Landscape Restoration Initiative because of the huge scale and ambition of this country-led project, which is currently working with 34 African nations, including almost all the countries in sub-Saharan Africa. However, the report authors aren't claiming that this is the only misguided reforestation effort; they argue that the issues raised by their analysis of this initiative are broadly representative of the situation elsewhere (for example, the All India Tree Plantation Campaign), although Africa is the continent with the greatest cover of savannah and grasslands.

Dr Nicola Stevens, a co-author of the paper and a researcher of African environments at the University of Oxford, explained that The African Forest Landscape Restoration Initiative's problem of tree planting in the wrong environments may stem from the urgency of getting projects off the ground in order to meet the 2030 timeline: 'The urgency of implementing large-scale tree planting is prompting funding of inadequately assessed projects that will most likely have negligible sequestration benefits and cause potential social and ecological harm.'

This statement doesn’t mean that the report authors are against the reforestation schemes, as Kate Parr, one of the other report authors and a professor of tropical ecology at the University of Liverpool makes clear. She said: ‘Restoration of ecosystems is needed and important, but it must be done in a way that is appropriate to each system. Non-forest systems such as savannahs are misclassified as forest and therefore considered in need of restoration with trees. There is an urgent need to revise definitions so that savannahs are not confused with forest because increasing trees is a threat to the integrity and persistence of savannahs and grasslands.’