Thursday, March 02, 2023


Doing Washington’s Bidding: Australia’s Treatment Of Daniel Duggan – OpEd

By Binoy Kampmark

The increasingly shabby treatment of former US marine Daniel Edmund Duggan by Australian authorities in the service of their US masters has again shown that the Australian passport is not quite worth the material it’s printed on.

In January this year, Sydney’s Downing Centre Local Court heard that Australian Attorney-General Mark Dreyfus had accepted a request from the US before Christmas to extradite Duggan.  Duggan is no longer a US citizen, but Canberra has often regarded such details as irrelevant when the US-Australian alliance is at stake.

In a 2017 indictment unsealed on December 9, prosecutors accuse Duggan, along with eight co-conspirators working at a South African flight school, of using his expertise to train Chinese fighter pilots to land on aircraft carriers.  It is also alleged that in 2008 the US State Department warned him to apply for written authorisation to train a foreign air force, a requirement of the International Traffic in Arms Regulations (ITAR).  The allegation is that he went ahead without securing authorisation, thereby breaching arms trafficking and control laws between 2009 and 2012.

Duggan has been held since October. In the finest traditions of Australian justice, he is being confined in conditions that suggest presumed guilt.  His lawyer, Dennis Miralis, has stated at various points with some exasperation that his client is “presumed innocent under US law”.  Duggan’s wife, Saffrine, insists that her husband is “a victim of the United States government’s political dispute with China.”  

This presumption has also been sorely tested by Duggan’s detention in a two-by-four-metre cell at the Silverwater jail, which also houses convicted terrorists.  Miralis can only assume that the New South Wales corrective services authorities have been all too willing to follow instructions delivered from on high.

Earlier this month lawyers for Duggan made a submission to the UN Human Rights Committee challenging these conditions.  Their submission argues that the authorities have failed to protect Duggan from “inhumane and degrading” treatment, failed to segregate him from convicted inmates, violated his right to adequate facilities to enable him to prepare his legal defence, and denied his right to confidential communications. 

The submission also references the assessment of a clinical psychologist who visited Duggan in the Silverwater prison.  “The psychologist described Mr Duggan’s conditions as ‘extreme’ and ‘inhumane’.  He advised that Mr Duggan was at risk of a major depressive disorder.”  Another condition causing him even further discomfort is benign prostatic hyperplasia.

Regarded as nothing more than contingent paperwork, citizenship offers feeble protection to Australians prosecuted by allied countries.  To the contrary, Canberra has often aided and abetted the undermining of citizens’ rights with a snotty “good riddance” attitude, glad to be rid of the supposedly bad apples.  

During the poorly conceived “War on Terror”, a tellingly ghastly response to the attacks on the United States on September 11, 2001, Australian citizens found themselves captured, rendered and left to decay in detention.  Such names should forever be taught in schools.  They include the Egyptian-Australian national Mamdouh Habib and David Hicks.

Habib was arrested in Pakistan in October 2001 and detained for three years on suspicion of having prior knowledge of the September 11 terrorist attacks, a fantasy encouraged by both US and Australian personnel.  Despite the US expressing the view in January 2005 that it would not lay charges against Habib, the Australian Attorney-General and Minister for Foreign Affairs remained adamant that Habib had prior knowledge of the attacks, had spent time in Afghanistan, and had trained with al-Qaida.  

Hicks was sent to the purgatory of Guantánamo Bay in January 2002 after being captured in Afghanistan by forces of the Northern Alliance.  He then became something of a judicial guinea pig, the victim of a military commission system initially deemed by the US Supreme Court to be unconstitutional, unfair and illegal.

What was particularly striking here were the instances of premature adjudication and the Australian calls for US authorities to do all they could to try and convict Hicks.  Prime Minister John Howard worried that Hicks, were he not to face a military commission trial in the US, would escape charges in Australia.  He did not “regard that as a satisfactory outcome, given the severity of the allegations that have been made against him.”   

Foreign Minister Alexander Downer even dared to suggest that Hicks should be grateful for not having a longer spell in US captivity.  “He would have been there for years if it hadn’t been for our intervention.”  

The subsequent Plea Agreement reached in March 2007, under which Hicks pleaded guilty to “providing material support for terrorism”, saw him receive a seven-year sentence, most of it suspended.  The remaining seven months of the sentence were served in Australia, which the UN Human Rights Committee held to be a “disproportionate restriction of the right to liberty” in violation of the International Covenant on Civil and Political Rights.  The HRC also noted that Hicks “had no other choice than to accept the Plea Agreement that was put to him” were he to escape the human rights violations he faced in Camp X-Ray.

On February 18, 2015, the United States Court of Military Commission Review set aside Hicks’ guilty plea and sentence.  The judges noted that the charge of providing material support for terrorism should be vacated, given the 2014 appeals court ruling that being tried for such an offence by a military commission was an “ex post facto violation”.

To crown this appalling résumé of achievements is the Australian government’s grossly feeble response to Julian Assange’s continued persecution in the United Kingdom at the hands of the US Department of Justice.  Facing a preposterously broad application of the Espionage Act of 1917, thereby imperilling national security journalism, Australian calls to drop the case have been weak and lukewarm at best.  The trend was set by Labor Prime Minister Julia Gillard, whose response to Cablegate in 2010 was to presume Assange guilty of having breached some regulation, despite failing to identify a single law to that effect.

Given this inglorious record, the Duggan case has an all too familiar feel to it.  The training of Chinese pilots by veteran personnel from a Western country would hardly have raised a murmur when relations between Washington and Beijing were less acrimonious.  Hicks also found himself in the historical crosshairs, foolishly wishing to throw in his lot with forces that were once the anti-communist darlings of the US intelligence community. 

The question here is what Australian citizens can do when providing services for foreign countries.  Serving in ultra-nationalist Ukrainian regiments, or moonlighting in the Israeli Defence Force, is unlikely to land you in trouble.  But proffering aeronautical expertise in a private capacity while earning some cash on the side?  How frightful.

If the fevered assessments from the Australian Security Intelligence Organisation are anything to go by, the only thing missing in Duggan’s extradition is the welcome card for the US DOJ.  ASIO Chief Mike Burgess, in his annual threat assessment, was eager to justify his agency’s bloated budget.  “More hostile foreign intelligence services, more spies, more targeting, more harm, more ASIO investigations, more ASIO disruptions. From where I sit, it feels like hand-to-hand combat.”  

Burgess shows a striking inability to understand why much of this is overegged paranoia.  Academics, business figures and bureaucrats, in suggesting he ease up on ASIO’s foreign interference and espionage operations, could only offer him “flimsy” justifications, such as “All countries spy on each other” and “We were going to make the information public anyway”.

Facing such a jaundiced worldview, Duggan’s future is bleak.  And now that Australia has willingly committed itself to Armageddon in lock-step with US forces in any conflict with the PRC, Canberra is doing everything it can to be an efficient detainer for its enormous and not always considerate friend.


Binoy Kampmark

Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He lectures at RMIT University, Melbourne. Email: bkampmark@gmail.com
Covid-19 transmissible between dogs: Study

SEOUL - A South Korean research team confirmed on Wednesday that some Covid-19 variants, including Delta and Omicron, can be transmitted between dogs.

Although there have been many reports of the coronavirus being transmitted from humans to dogs, this is the first study to demonstrate transmission of the virus between dogs.

The study was conducted by a joint research team led by Professor Song Dae-sub of Seoul National University’s College of Veterinary Medicine and researcher Yoo Kwang-soo of Jeonbuk National University.

The research team infected a beagle with the Delta and Omicron variants by introducing the virus through the dog’s nose. After 24 hours, they put an uninfected dog in the same cage.

Researchers did not detect any visible symptoms in either the infected or the uninfected dogs after observing them for seven days.

They detected signs of viral pneumonia, a common feature of Covid-19, only when they analysed tissue from the dogs’ lungs.

The team also found that proliferative viruses can be spread through dogs’ nasal discharge.

The study suggested that coronaviruses affecting humans, such as those that cause Covid-19 and Mers, can be transmitted to other species.

The research team suggested that pet vaccinations should be actively considered to prevent animal-to-human infection and the emergence of another variant from pets.

“If infection between species and individuals is repeated, the possibility of another variant increases,” said Prof Song.

“It is time to consider the use of animal vaccines to prevent the reverse zoonosis of pets.”

Reverse zoonosis refers to an infection or disease that is transmissible from humans to animals.

The study was funded by the Korea Centres for Disease Control and Prevention, and the paper was published in Emerging Infectious Diseases, a medical journal published by the US Centers for Disease Control and Prevention. 



THE KOREA HERALD/ASIA NEWS NETWORK





Generative AI And Large Language Models: The AI Gold Rush – Analysis


By Dr. Sanur Sharma*

Summary

Generative Artificial Intelligence (AI) models have a vast application landscape and use cases, as they can help enterprises automate intelligence through a knowledge base across multiple domains. These models can help scale up innovation in AI development across sectors. Negative use cases of generative AI include the spread of disinformation and influence operations. Organisations and governments are attempting to address these concerns through practices like responsible data collection, ethical principles for AI, and algorithmic transparency.

The democratisation of Artificial Intelligence (AI) through new technology platforms is gaining significant importance, with tech giants like Google, Microsoft and Baidu challenging each other in the business of Generative AI. Large Language Models (LLMs) and Generative AI models like OpenAI’s ChatGPT, released into the public domain, have created a stir online and within communities about the possibility of AI replacing humans. The expansion of LLMs has gained momentum in the past two years, with AI-based chatbots and conversational agents taking over the online marketplace. 

Their ability to handle diverse tasks like answering complex questions, generating text, sounds, and images, translating languages, summarising documents, and writing highly accurate computer programmes has brought them into the public eye. These models can synthesise information from billions of words from the web and other sources and give a sense of fluid interaction. Amidst the hype around these models, the less debated issue is the possibility of these tools generating falsehoods, biases, and other ethical considerations. 

Generative AI models 

Generative AI systems refer to the class of machine learning in which a system is trained to generate new data or content: audio, video, text, images, art, music, or an entire virtual world. These models learn the statistical patterns and structures of their training data and then produce new samples that resemble the original data. Because they are trained on humongous amounts of data, they can appear creative, producing varied and unexpected outputs that look genuine. 

Generative AI models include Variational Autoencoders, Autoregressive Models, and Generative Adversarial Networks. They have varied applications today, from image generation to music creation, data augmentation and more. The area gaining the most significance today is text generation tools, also known as large language models, with leading companies and labs pursuing R&D in the field. 
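To make the idea concrete, here is a toy sketch, illustrative only and far simpler than any of the systems discussed here, that learns character-to-character statistics from a tiny corpus and then samples a new string resembling it, which is the generative principle in miniature:

    import random
    from collections import defaultdict

    # Toy "training corpus": the model learns which character tends
    # to follow which (a stand-in for learning statistical patterns).
    corpus = "generative models learn patterns and generate new data"

    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    # Sampling: each character is drawn from the learned distribution
    # over what followed the previous character in the training data.
    out = ["g"]
    for _ in range(40):
        followers = transitions.get(out[-1])
        if not followers:  # no observed successor, stop early
            break
        out.append(random.choice(followers))
    print("".join(out))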

The lineage of today’s LLMs begins with the Transformer architecture, introduced in 2017; among the first such models were Google’s Bidirectional Encoder Representations from Transformers (BERT) and OpenAI’s Generative Pre-trained Transformer (GPT), both released in 2018. Many models followed, such as OpenAI’s GPT-2, PaLM 540B, Megatron 530B, GitHub’s Copilot, Stable Diffusion, and InstructGPT.1 More recently, next-generation tools like ChatGPT, DALL-E-2 and Google’s Language Model for Dialogue Applications (LaMDA) have become internet sensations. These LLMs are trained on large amounts of data (petabytes) and can be used in zero-shot or few-shot scenarios, where little domain knowledge is available, generating output from just a few prompts. For instance, OpenAI’s GPT-3 is a 175-billion-parameter model that can generate text and code from a very short prompt.2 
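Few-shot use means the task is specified entirely inside the prompt through a handful of worked examples, with no retraining. A minimal sketch of how such a prompt might be assembled (the reviews and labels below are invented for illustration):

    # Hypothetical few-shot sentiment prompt: the model infers the task
    # from the examples and completes the final line; no fine-tuning occurs.
    examples = [
        ("The film was a delight.", "positive"),
        ("Service was slow and rude.", "negative"),
    ]
    query = "The battery dies within an hour."

    prompt = "Classify the sentiment of each review.\n\n"
    for text, label in examples:
        prompt += f"Review: {text}\nSentiment: {label}\n\n"
    prompt += f"Review: {query}\nSentiment:"

    print(prompt)  # this string would be sent to an LLM completion endpoint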

Generative AI models have a vast application landscape and use cases. They can help enterprises automate intelligence through a knowledge base across the multiple domains shown in Figure 1. In addition, these models have the capability to scale up innovation in AI development across sectors.

Source: By Author

ChatGPT

ChatGPT is a generative AI model based on the transformer architecture that generates natural-language responses to a given prompt. It is an autoregressive model, producing a sequence of text one token at a time, with each token conditioned on the tokens that precede it. ChatGPT has revolutionised people’s interaction with technology to the point where a session can feel like a conversation between two people.
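Schematically, autoregressive generation is a loop: feed the model the tokens so far, receive a probability distribution over the next token, pick one, append it and repeat. The sketch below stubs out the model with random probabilities so that the loop runs; in ChatGPT itself the distribution would come from the trained transformer:

    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

    def next_token_distribution(context):
        # Stand-in for the transformer: in a real LLM this would be a
        # forward pass conditioned on every token in `context`.
        weights = [random.random() for _ in VOCAB]
        total = sum(weights)
        return [w / total for w in weights]

    tokens = ["the"]  # the prompt
    while tokens[-1] != "<eos>" and len(tokens) < 12:
        probs = next_token_distribution(tokens)
        # Sample the next token in proportion to the model's probabilities.
        tokens.append(random.choices(VOCAB, weights=probs)[0])
    print(" ".join(tokens))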

The underlying GPT models were first introduced by OpenAI in 2018; ChatGPT itself is based upon InstructGPT, with changes in the data collection setup, and was made public for user feedback in November 2022. Mesmerised users posted on social media what the chatbot could do (produce code, write essays, poems, speeches and letters), even stirring fear among content writers of losing their jobs. However, the full scope of these tools is yet to be determined, as there are risks associated with the technology that need to be addressed. 

GPT tools have been in the market before and are used for various use cases. These models have gone through a series of improvements over time. Figure 2 presents the timeline of OpenAI’s GPT models.

Source: By Author

ChatGPT has a broad range of applications: expert conversational agents, language translation and text summarisation, to name a few. It can also learn and adapt to new contexts and situations by analysing text and updating its algorithm based on new data, continuous analysis that makes its responses more accurate. It is trained with reinforcement learning from human feedback (Figure 3).3 First, the model undergoes supervised fine-tuning on conversations provided by human AI trainers. A reward model is then built from comparison data: conversations between AI trainers and the chatbot, with alternative sampled responses ranked by quality. Finally, the model is fine-tuned against this reward model using Proximal Policy Optimisation. ChatGPT is a fine-tuned model from the GPT-3.5 series, which completed training in 2022. Both models were trained on Azure AI supercomputing infrastructure.4

Source: Adapted by Author from OpenAI.5
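The comparison data described above is typically turned into a training signal with a pairwise ranking loss: for each pair of responses where trainers preferred one over the other, the loss pushes the reward model to score the preferred response higher. A minimal sketch of that objective, a standard formulation in the RLHF literature rather than OpenAI’s exact code, with scalar scores standing in for the reward model’s outputs:

    import math

    def pairwise_ranking_loss(score_chosen, score_rejected):
        # Bradley-Terry style objective: -log sigmoid(r_chosen - r_rejected).
        # Minimising it raises the preferred response's score relative to
        # the rejected one. (A sketch, not OpenAI's exact implementation.)
        return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

    # A correctly ranked pair yields a small loss:
    print(pairwise_ranking_loss(2.0, -1.0))  # ~0.049
    # A wrongly ranked pair yields a large loss:
    print(pairwise_ranking_loss(-1.0, 2.0))  # ~3.05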

One of the key benefits of ChatGPT is its capacity to process and learn from interactions with users, grasping the context and nuances of language and producing meaningful, accurate responses. It can continually improve through conversations, building on its extensive database, so one can expect even more remarkable capabilities from the model in the future. Furthermore, its deep learning architecture allows it to achieve a high level of accuracy in content creation.

ChatGPT’s training data was collected from vast sources: web pages, books, scientific articles, and corpora of text from elsewhere. The model was trained on 570 GB of data, about 300 billion words.6 The cut-off for this data collection was 2021,7 which limits the model’s ability to generate contextually appropriate responses: it lacks real-time data and any information or analysis post-2021. In addition, behind this colossal dataset and training effort, ChatGPT still needs to address some issues, including additional layers of verification and validation so that it presents more meaningful and reliable information.

Race to Build Generative AI and LLMs

The overwhelming response to models like ChatGPT, LaMDA and DALL-E-2 has stirred the industry and started a race amongst the tech giants to build such models as a significant part of the search engine business. 

Google’s LaMDA, developed in 2020 and based on the Transformer neural network architecture,8 gained popularity in 2022 when a Google engineer went public and termed it a sentient system. The much-hyped generative AI chatbot is said to be more capable than ChatGPT, but until it is publicly released this is difficult to verify. On 6 February, Google announced another AI chatbot, ‘Bard’, a conversational AI positioned as a rival to OpenAI’s ChatGPT.9 It is said to be capable of responding to human queries and synthesising information like ChatGPT, and is a lightweight version of LaMDA. However, within days of the launch a flaw in Bard was noticed: the tool made a factual error in one of its promotional videos. Following this, Google’s share price dropped by 9 per cent and the company lost around US$ 100 billion in market value. Google’s Vice President of Search, Prabhakar Raghavan, asked trainers and executives to rewrite Bard’s incorrect responses. Google is also investing US$ 300 million in Anthropic, an AI startup working in the field of Generative AI.10 Other generative AI models by Google include MUM, PaLM and MusicLM. 

Microsoft is also said to be investing billions of dollars in AI and revamping its Bing search engine and Edge web browser with AI capabilities.11 It is collaborating with OpenAI to integrate ChatGPT into Bing and to further commercialise its Azure OpenAI service with several AI models, including GPT-3.5, Codex, DALL-E and the soon-to-be-released GPT-4.12 On 7 February, Microsoft launched the AI-powered Bing search engine and Edge browser in preview as an ‘AI co-pilot for the web’. Users could ask Bing questions and receive direct answers in chat rather than links to websites. Those with access to the preview were curious enough to hold prolonged interactions with the search engine, during which it became unhinged and started expressing emotions of love and anger. 

Following this, Microsoft put a cap of five questions per session and 50 questions per day, having observed that only 1 per cent of users ask more than 50 questions in a day.13 The company stated that the tool needed further training to be more reliable. In future, it will introduce a toggle allowing users to select the level of creativity they wish to have in its responses.14 DALL-E-2, a text-to-image generator, faced a similar glitch in the past, when the tool was said to have created its own language and struggled to render coherent text within images.
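Such a “creativity” setting most plausibly maps onto the sampling temperature of the underlying model, though Microsoft has not said so: dividing the next-token scores by a temperature below 1 sharpens the distribution (safer, more predictable text), while a temperature above 1 flattens it (more varied, riskier text). A small sketch of the mechanism, with made-up scores:

    import math

    def softmax_with_temperature(logits, temperature):
        # Lower temperature -> sharper distribution (conservative output);
        # higher temperature -> flatter distribution (more "creative" output).
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [2.0, 1.0, 0.1]  # hypothetical next-token scores
    print(softmax_with_temperature(logits, 0.5))  # peaked: top token dominates
    print(softmax_with_temperature(logits, 1.5))  # flat: alternatives gain weight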

The big tech companies’ investment in Generative AI tools indicates the promise these tools present and the profound benefit users experience when they encounter meaningful writing and content that reads as though a human produced it. These tools will ease the doing of business across sectors: devising personalised marketing, social media and sales content; code generation, documentation and content creation in IT; extracting data from, summarising and drafting legal documents; enabling R&D in drug discovery; providing self-serve functions in Human Resources (HR) and assisting in content creation for questionnaires and interviews; optimising employees’ time through automated responses, text translation, crafting presentations and synthesising information from video meetings; and creating assistants for specific businesses.15 

In the future, these tools are expected to generate their own data, bootstrapping their own intelligence and fine-tuning it for better performance. All of today’s tools are based on autoregressive transformer models and are dense, meaning they use all of their parameters (millions or billions of them) to produce each response. Research is now moving towards models that activate only the relevant parameters to generate a response, making them less computationally demanding.16 
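One much-studied way of using only the relevant parameters is a mixture-of-experts layer: a lightweight router scores a bank of expert sub-networks and only the top-scoring few run for a given input, leaving most of the model’s weights idle on each call. A schematic numpy sketch, with sizes and a routing rule that are illustrative rather than those of any production system:

    import numpy as np

    rng = np.random.default_rng(0)
    n_experts, d, k = 4, 8, 2  # 4 experts, width 8, activate the top 2

    router = rng.normal(size=(d, n_experts))      # routing weights
    experts = rng.normal(size=(n_experts, d, d))  # one weight matrix per expert

    def moe_layer(x):
        scores = x @ router                # score each expert for this input
        top_k = np.argsort(scores)[-k:]    # indices of the k most relevant experts
        gates = np.exp(scores[top_k])
        gates /= gates.sum()               # normalised mixing weights
        # Only the selected experts' parameters are used; the rest stay idle.
        return sum(g * (x @ experts[i]) for g, i in zip(gates, top_k))

    x = rng.normal(size=d)
    print(moe_layer(x).shape)  # (8,): same width, at roughly k/n of the compute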

The race among the tech giants to bring out these tools resembles the innovator’s dilemma playing out over the search engine business. The hurry could reflect either a wish to take the lead in the business and its future, or a wish to collect more data from human users and keep training their models to perform better. Either way, these tools will soon be part of everyday business, and criticism of their shortcomings will follow.

Implications of Generative AI Models

The challenge with Generative AI models is to ensure that the generated data is of good quality, balanced, free from potential biases and a good representation of the original data. These models carry a risk of overfitting and of generating unrealistic data, which raises ethical concerns about their use. Last year, Google’s chatbot LaMDA was claimed to be sentient by one of the company’s engineers,17 and OpenAI’s DALL-E-2, outputting gibberish, was said to have created its own language.

Another issue with these Generative AI systems is that cybercriminals have started to use them to develop malicious code and tools. According to Check Point Research (CPR), major underground hacking communities are already using OpenAI’s tools to create spear-phishing emails, infostealers, encryption tools and other instruments of fraud. Hackers are using the dark web to advertise the benefits of such malware and to share code (for generating stealers, for example) produced with the help of tools like ChatGPT.18 

One of the negative use cases of Generative AI is the spread of disinformation, the shaping of public perception, and influence operations. These language models can automate the creation of misleading text, audio and video to spread propaganda on behalf of various malicious actors. A report by CSET and OpenAI discusses the three dimensions (actors, behaviours and content) along which language models and Generative AI can be used for targeted influence operations.19 Considering the pace of development in this field, these models are likely to become more usable, efficient and cost-effective with time, making it easier for threat actors to use them for malicious activities.

Currently, organisations and governments/countries are attempting to address these concerns through practices like responsible data collection, ethical principles for AI, and algorithmic transparency. Moreover, the legal implications of using AI models are also under consideration, specifically on regulations and guidelines for various sectors like healthcare, finance, and defence—where data privacy, security, and regulation on the use of AI for decision-making are of utmost importance.

At present, no regulations apply to LLMs or AI language models in particular. There is, however, a need to spread awareness among stakeholders and civil society of the ethical and legal implications of these technologies, and to ensure that appropriate frameworks are in place for their responsible use. Countries are striving to establish AI strategies and data protection laws, focusing on regulations for AI governance and its ethical use. Among such initiatives are the OECD’s ethical principles on AI and its support to other countries and organisations in establishing AI principles and best practices.20 

The European Union’s (EU) data protection law is another act that closely addresses the privacy issues surrounding data and algorithms.21 The EU is also moving fast with its draft AI Act, which intends to govern all AI use cases.22 The US is likewise working towards AI governance and has issued various policies and principles, such as the 2023 US National Defense Authorization Act (NDAA), which proposes provisions for governing and deploying AI capabilities; Sections 7224 and 7226 set out principles and policies for the use of AI in government and for the rapid deployment and scaling of applied AI in modernisation activities.23 The US National Institute of Standards and Technology (NIST) has also issued Version 1.0 of its AI Risk Management Framework (AI RMF 1.0), a multi-tool for organisations designing and deploying trustworthy AI.24 Recently, China has come out with a series of regulations specific to different types of algorithms and AI capabilities, including rules on AI algorithms for deepfakes.25

Conclusion

Generative AI systems have the potential to revolutionise the way we work and live. Their capability to cater to diverse audiences with meaningful, contextualised information and tailor-made responses marks a significant breakthrough in technology and in how we use it. As tech companies dive into the fray of AI applications and use cases, it is imperative to study the implications of this technology and how it affects society at large. The regulation of AI systems is still in its infancy, and countries looking to build their own policies and regulations can learn from the positives and negatives of the two different models being implemented by the EU and China. 

The next wave of innovation in Generative AI and LLMs will bring new use cases and applications in other domains, with better reliability mechanisms. These AI tools certainly have vast potential, but they should not be relied upon wholesale as a replacement for human decision-making: they lack emotional intelligence and human intuition, struggle with linguistic nuance and context, and risk having biases introduced at any point in their structural mechanisms. There is no silver-bullet solution with Generative AI systems, and hence coordination among stakeholders, civil society, government and other institutions is needed to manage and control the risks associated with this technology.

Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.

*About the author: Dr Sanur Sharma is Associate Fellow at Manohar Parrikar Institute for Defence Studies and Analyses.

Source: This article was published by Manohar Parrikar IDSA


Manohar Parrikar Institute for Defence Studies and Analyses (MP-IDSA)

The Manohar Parrikar Institute for Defence Studies and Analyses (MP-IDSA) is a non-partisan, autonomous body dedicated to objective research and policy-relevant studies on all aspects of defence and security. Its mission is to promote national and international security through the generation and dissemination of knowledge on defence and security-related issues. MP-IDSA was formerly named the Institute for Defence Studies and Analyses (IDSA).