Tuesday, November 19, 2024

Law and disorder as Thai police station comes under monkey attack

By AFP
November 18, 2024

The human inhabitants of Lopburi have long suffered from a growing and aggressive monkey population - Copyright AFP Abdul Goni

Police in central Thailand said they barricaded themselves into their own station over the weekend, after a menacing mob of 200 escaped monkeys ran riot in the town.

The human inhabitants of Lopburi have long suffered from a growing and aggressive monkey population and authorities have built special enclosures to contain groups of the unruly residents.

But on Saturday around 200 of the primates broke out and rampaged through town, with one posse descending on a local police station.

“We’ve had to make sure doors and windows are closed to prevent them from entering the building for food,” police captain Somchai Seedee told AFP on Monday.

He was concerned the marauders could destroy property including police documents, he added.

Traffic cops and officers on guard duty were being called in to fend off the visitors, the Lopburi police said on Facebook on Sunday.

Around a dozen of the intruders were still perched proudly on the roof of the police station on Monday, photos from local media showed.

Down in the streets, hapless police and local authorities were working to round up rogue individuals, luring them away from residential areas with food.

While Thailand is an overwhelmingly Buddhist nation, it has long assimilated Hindu traditions and lore from its pre-Buddhist era.

As a result monkeys are afforded a special place in Thai hearts thanks to the heroic Hindu monkey god Hanuman, who helped Rama rescue his beloved wife Sita from the clutches of an evil demon king.

Thousands of the fearless primates rule the streets around the Phra Prang Sam Yod temple in the centre of Lopburi.

The town has been laying on an annual feast of fruit for its population of macaques since the late 1980s, part religious tradition and part tourist attraction.

But their growing numbers, vandalism and mob fights have made an uneasy coexistence with their human neighbours almost intolerable.

Lopburi authorities have tried quelling instances of human-macaque clashes with sterilisation and relocation programs.




Future generations may lose jobs due to their typing speed

Traditional keyboarding was required for 63.3 percent of all workers in 2019

By Dr. Tim Sandle
DIGITAL JOURNAL
November 19, 2024


As typing becomes less common amongst Gen Z device users, and schools fail to bridge the gap with typing classes, recruiters warn that young workers could enter the job market under-skilled.

Part of the reason typing skills have dropped is the heavy reliance by younger people on mobile devices. Trends suggest that smartphones account for 82 percent of Gen Z device use, while other devices are used less. This compares to 72 percent for Generation Y/Millennials, 66 percent for Generation X, and 43 percent for Boomers.

This means less time spent on other devices, like laptops and desktops. One skill particularly at risk from this trend is typing.

Commenting on this trend for Digital Journal is John Michaloudis, a technology expert who runs the online training platform MyExcelOnline: “Typing skills are becoming much rarer amongst Gen Z device users, and this is simply because they spend a lot more of their time using digital keyboards than real ones. A big part of this is down to the death of typing classes in schools, as well.”

“Typing may not be considered a high-level skill and, what’s more, it’s something that others, such as millennials, learned naturally as they got acclimated to digital technology,” adds Michaloudis. “However, that’s no longer the case as much. So, if you don’t have formal training, and you don’t have people learning independently, it’s natural these skills will fade away.”

This does not mean that typing is any less vital in the workplace. According to the U.S. Bureau of Labor Statistics, traditional keyboarding was required for 63.3 percent of all workers in 2019.

Those seeking to improve their typing skills can:

Practice with Online Tools: Utilize platforms like TypingClub or Keybr to practice typing through engaging exercises and track progress.

Touch Typing Programs: Enroll in touch typing courses designed to increase speed and accuracy by teaching typing without looking at the keyboard.

Use Online Typing Games: Websites like “TypeRacer” offer fun ways to enhance accuracy and speed.

Daily Practice: Dedicate a specific amount of time each day to type consistently, focusing on both speed and reducing errors.

Set Achievable Goals: Break down typing improvement into manageable targets, such as increasing words per minute (WPM) by 5 every month. This provides motivation and a clear path to improvement.
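The goal-setting arithmetic in the tips above can be sketched in a few lines of Python. This is purely illustrative (the numbers and function names are hypothetical, not from any cited tool); it uses the common convention that one "word" equals five typed characters.

```python
def words_per_minute(chars_typed: int, seconds: float) -> float:
    """Compute typing speed using the standard 5-characters-per-word rule."""
    if seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return (chars_typed / 5) / (seconds / 60)

def monthly_targets(current_wpm: float, goal_wpm: float, step: float = 5.0):
    """Break an improvement goal into +5 WPM monthly targets."""
    targets = []
    wpm = current_wpm
    while wpm < goal_wpm:
        wpm = min(wpm + step, goal_wpm)
        targets.append(round(wpm, 1))
    return targets

# 1,500 characters in 5 minutes is 300 "words" in 5 minutes:
print(words_per_minute(chars_typed=1500, seconds=300))  # 60.0
# Getting from 40 WPM to 60 WPM at +5 per month takes four months:
print(monthly_targets(40, 60))  # [45.0, 50.0, 55.0, 60.0]
```

Tracking progress against such explicit monthly targets is what makes the "Set Achievable Goals" tip measurable rather than vague.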




U.S. to call for Google to sell Chrome browser: report


By AFP
November 18, 2024

Photo illustration: — © Digital Journal

The U.S. will urge a judge to make Google-parent company Alphabet sell its widely used Chrome browser in a major antitrust crackdown on the internet giant, according to a media report Monday.

Antitrust officials with the US Department of Justice declined to comment on a Bloomberg report that they will ask for a sell-off of Chrome and a shake-up of other aspects of Google’s business in court Wednesday.

Justice officials in October said they would demand that Google make profound changes to how it does business — even considering the possibility of a breakup — after the tech juggernaut was found to be running an illegal monopoly.

The government said in a court filing that it was considering options that included “structural” changes, which could see it ask Google to divest its Android smartphone operating system or its Chrome browser.

Calling for the breakup of Google would mark a profound change by the US government’s regulators, which have largely left tech giants alone since failing to break up Microsoft two decades ago.

Google dismissed the idea at the time as “radical.”

Adam Kovacevich, chief executive of industry trade group Chamber of Progress, released a statement arguing that what justice officials reportedly want is “fantastical” and defies legal standards, instead calling for narrowly tailored remedies.


Google Chrome is the most popular internet browser in the world, making the internet giant a part of everyday life for people around the globe – Copyright AFP KIMIHIRO HOSHINO

Determining how to address Google’s wrongs is the next stage of a landmark antitrust trial that saw the company in August ruled a monopoly by US District Court Judge Amit Mehta.

Requiring Google to make its search data available to rivals was also on the table.

Regardless of Judge Mehta’s eventual decision, Google is expected to appeal the ruling, potentially prolonging the process for years and possibly reaching the US Supreme Court.

The trial, which concluded last year, scrutinized Google’s confidential agreements with smartphone manufacturers, including Apple.

These deals involve substantial payments to secure Google’s search engine as the default option on browsers, iPhones and other devices.

The judge determined that this arrangement provided Google with unparalleled access to user data, enabling it to develop its search engine into a globally dominant platform.

From this position, Google expanded its tech empire to include the Chrome browser, Maps and the Android smartphone operating system.

According to the judgment, Google controlled 90 percent of the US online search market in 2020, with an even higher share, 95 percent, on mobile devices.

Remedies being sought will include measures curbing Google’s artificial intelligence from tapping into website data and barring the Android mobile operating system from being bundled with the company’s other offerings, according to the report.



S.Africa offers a lesson on how not to shut down a coal plant


By AFP
November 18, 2024

Before it turned off the switches in October 2022, the plant fed 121 megawatts into South Africa's grid - Copyright AFP PAUL BOTES

Zama LUTHULI

The cold corridors of South Africa’s once-mighty Komati coal-fired power plant have been quiet since its shutdown in 2022 in what was trumpeted as a pioneering project in the world’s transition to green energy.

Two years later, plans to repurpose the country’s oldest coal power plant have amounted to little in a process that offers caution and lessons for countries intending to reduce their reliance on fossil fuels and switch to renewables.

Jobs have been lost and construction for wind and solar energy generation has yet to start, with only a few small green projects under way.

“We cannot construct anything. We cannot remove anything from the site,” acting general manager Theven Pillay told AFP at the 63-year-old plant embedded in the coal belt in Mpumalanga province, where the air hangs thick with smog.

Poor planning and delays in paperwork to authorise the full decommissioning of the plant have been the main culprits for the standstill, he said. “We should have done things earlier. So we would consider it is not a success.”

Before it turned off the switches in October 2022, the plant fed 121 megawatts into South Africa’s chronically undersupplied and erratic electricity grid.

The transition plan — which won $497 million in funding from the World Bank — envisions the generation of 150 megawatts via solar and 70 megawatts from wind, with capacity for 150 megawatts of battery storage.

Workers are to be reskilled and the plant’s infrastructure, including its massive cooling towers, repurposed.

But much of this is still a long way off. “They effectively just shut down the coal plant and left the people to deal with the outcomes,” said deputy energy and electricity minister Samantha Graham.



– Disgruntled –



Coal provides 80 percent of South Africa’s power and the country is among the world’s 12 largest greenhouse gas emitters. Coal is also a bedrock of its economy, employing around 90,000 people.

South Africa was the first country in the world to form a Just Energy Transition Partnership (JETP) with international funders to move off dirty power generation, already receiving $13.6 billion in total in grants and loans, Neil Cole of the JETP presidential committee told AFP.

Komati is the first coal plant scheduled for decommissioning, with five of the remaining 14 due to follow by 2030.

It had directly employed 393 people, the state energy firm Eskom that owns the plant told AFP. Only 162 remain on site as others volunteered for transfer or accepted payouts.

The plant had been the main provider of employment in the small town, where the quiet streets are pitted with chunks of coal. Today, several houses are vacant as workers from other provinces headed home after losing their jobs.

“Our jobs ending traumatised us a lot as a community,” said Sizwe Shandu, 35, who had been contracted as a boilermaker at the plant since 2008.

The shutdown had been unexpected and left his family scrambling to make ends meet, he said. With South Africa’s unemployment rate topping 32 percent, Shandu now relies on government social grants to buy food and electricity.

Pillay admitted that many people in the town of Komati had a “disgruntled view” of the transition. One of the mistakes was that coal jobs were closed before new jobs were created, he said. People from the town did not always have the skills required for the emerging jobs.

Eskom has said it plans to eventually create 363 permanent jobs and 2,733 temporary jobs at Komati.

One of the green projects under way combines raising fish alongside vegetable patches supported by solar panels.

Seven people, from a planned 21, have been trained to work on the aquaponics scheme, including Bheki Nkabinde, 37.

“Eskom has helped me big time in terms of getting this opportunity because now I’ve got an income, I can be able to support my family,” he told AFP, as he walked among his spinach, tomatoes, parsley and spring onions.

The facility is also turning invasive plants into pellets that are an alternative fuel to coal and assembling mobile micro power grids fixed to containers. A coal milling workshop has been turned into a welding training room.

– Mistakes and lessons –


The missteps at Komati are lessons for other coal-fired power plants marked for shutdown, Pillay said. For example, some now plan to start up green energy projects parallel to the phasing out of fumes.

But the country is “not going to be pushed into making a decision around how quickly or how slowly we do the Just Energy Transition based on international expectations”, said Graham.

South Africa has seven percent renewable energy in its mix, up from one percent a decade ago, she said. And it will continue mining and exporting coal, with Eskom estimating that there are almost 200 years of supply still in the ground.

The goal is to have a “good energy mix that’s sustainable and stable”, Graham said.

Since South Africa’s JETP was announced, Indonesia, Vietnam and Senegal have struck similar deals, but there has been little progress towards actually closing coal plants under the mechanism.

Among the criticisms is that it offers largely market-rate lending terms, raising the threat of debt repayment problems for recipients.

 

Leaner large language models could enable efficient local use on phones and laptops


Princeton University, Engineering School





Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server — a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information. This type of leaner LLM could be stored and accessed locally on a device like a phone or laptop and could provide performance nearly as accurate and nuanced as an uncompressed version.

“Any time you can reduce the computational complexity, storage and bandwidth requirements of using AI models, you can enable AI on devices and systems that otherwise couldn’t handle such compute- and memory-intensive tasks,” said study coauthor Andrea Goldsmith, dean of Princeton’s School of Engineering and Applied Science and Arthur LeGrand Doty Professor of Electrical and Computer Engineering.

“When you use ChatGPT, whatever request you give it goes to the back-end servers of OpenAI, which process all of that data, and that is very expensive,” said coauthor Rajarshi Saha, a Stanford Engineering Ph.D. student. “So, you want to be able to do this LLM inference using consumer GPUs [graphics processing units], and the way to do that is by compressing these LLMs.” Saha’s graduate work is coadvised by Goldsmith and coauthor Mert Pilanci, an assistant professor at Stanford Engineering.

The researchers will present their new algorithm CALDERA, which stands for Calibration Aware Low precision DEcomposition with low Rank Adaptation, at the Conference on Neural Information Processing Systems (NeurIPS) in December. Saha and colleagues began this compression research not with LLMs themselves, but with the large collections of information that are used to train LLMs and other complex AI models, such as those used for image classification. This technique, a forerunner to the new LLM compression approach, was published in 2023.

Training data sets and AI models are both composed of matrices, or grids of numbers that are used to store data. In the case of LLMs, these are called weight matrices, which are numerical representations of word patterns learned from large swaths of text.

“We proposed a generic algorithm for compressing large data sets or large matrices,” said Saha. “And then we realized that nowadays, it’s not just the data sets that are large, but the models being deployed are also getting large. So, we could also use our algorithm to compress these models.”

While the team’s algorithm is not the first to compress LLMs, its novelty lies in an innovative combination of two properties, one called “low-precision,” the other “low-rank.” As digital computers store and process information as bits (zeros and ones), “low-precision” representation reduces the number of bits, speeding up storage and processing while improving energy efficiency. On the other hand, “low-rank” refers to reducing redundancies in the LLM weight matrices.

“Using both of these properties together, we are able to get much more compression than either of these techniques can achieve individually,” said Saha.
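As a rough illustration of how the two properties combine, here is a toy NumPy sketch. This is not the authors' CALDERA algorithm (whose details are in the paper); it simply shows the generic ideas described above: truncate a synthetic weight matrix to low rank, then store the small factors at low precision. All sizes and bit-widths are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "weight matrix" with redundant, approximately low-rank
# structure, standing in for an LLM weight matrix.
W = rng.standard_normal((256, 64)) @ rng.standard_normal((64, 256))

def quantize(M, bits=4):
    """Uniform low-precision quantization: snap entries to 2**bits levels."""
    lo, hi = M.min(), M.max()
    step = (hi - lo) / (2**bits - 1)
    return lo + np.round((M - lo) / step) * step

# Low-rank: keep only the top-k singular directions via truncated SVD.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 32
left, right = U[:, :k] * s[:k], Vt[:k]   # W is approximately left @ right

# Combined: quantize the *small* low-rank factors, not the full matrix.
approx = quantize(left, bits=4) @ quantize(right, bits=4)

rel_err = np.linalg.norm(W - approx) / np.linalg.norm(W)
stored = (left.size + right.size) * 4    # bits for the 4-bit factors
original = W.size * 32                   # bits at float32 precision
print(f"relative error: {rel_err:.2f}, compression: {original / stored:.0f}x")
```

Quantizing the low-rank factors rather than the full matrix is what lets the two techniques reinforce each other: the factors contain far fewer numbers, so the same bit budget buys much more compression than either step alone.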

The team tested their technique using Llama 2 and Llama 3, open-source large language models released by Meta AI, and found that their method, which uses low-rank and low-precision components in tandem, improves on methods that use only low-precision. The improvement can be up to 5 percent, which is significant for metrics that measure uncertainty in predicting word sequences.

They evaluated the performance of the compressed language models using several sets of benchmark tasks for LLMs. The tasks included determining the logical order of two statements, or answering questions involving physical reasoning, such as how to separate an egg white from a yolk or how to make a cup of tea.

“I think it’s encouraging and a bit surprising that we were able to get such good performance in this compression scheme,” said Goldsmith, who moved to Princeton from Stanford Engineering in 2020. “By taking advantage of the weight matrix rather than just using a generic compression algorithm for the bits that are representing the weight matrix, we were able to do much better.”

Using an LLM compressed in this way could be suitable for situations that don’t require the highest possible precision. Moreover, the ability to fine-tune compressed LLMs on edge devices like a smartphone or laptop enhances privacy by allowing organizations and individuals to adapt models to their specific needs without sharing sensitive data with third-party providers. This reduces the risk of data breaches or unauthorized access to confidential information during the training process. To enable this, the LLMs must initially be compressed enough to fit on consumer-grade GPUs.

Saha also cautioned that running LLMs on a smartphone or laptop could hog the device’s memory for a period of time. “You won’t be happy if you are running an LLM and your phone drains out of charge in an hour,” said Saha. Low-precision computation can help reduce power consumption, he added. “But I wouldn’t say that there’s one single technique that solves all the problems. What we propose in this paper is one technique that is used in combination with techniques proposed in prior works. And I think this combination will enable us to use LLMs on mobile devices more efficiently and get more accurate results.”

The paper, “Compressing Large Language Models using Low Rank and Low Precision Decomposition,” will be presented at the Conference on Neural Information Processing Systems (NeurIPS) in December 2024. In addition to Goldsmith, Saha and Pilanci, coauthors include Stanford Engineering researchers Naomi Sagan and Varun Srivastava. This work was supported in part by the U.S. National Science Foundation, the U.S. Army Research Office, and the Office of Naval Research.

Asking ChatGPT vs Googling: Can AI chatbots boost human creativity?

The Conversation
November 18, 2024

Image via TippaPatt/Shutterstock.

Think back to a time when you needed a quick answer, maybe for a recipe or a DIY project. A few years ago, most people’s first instinct was to “Google it.” Today, however, many people are more likely to reach for ChatGPT, OpenAI’s conversational AI, which is changing the way people look for information.

Rather than simply providing lists of websites, ChatGPT gives more direct, conversational responses. But can ChatGPT do more than just answer straightforward questions? Can it actually help people be more creative?

I study new technologies and consumer interaction with social media. My colleague Byung Lee and I set out to explore this question: Can ChatGPT genuinely assist people in creatively solving problems, and does it perform better at this than traditional search engines like Google?


Across a series of experiments in a study published in the journal Nature Human Behaviour, we found that ChatGPT does boost creativity, especially in everyday, practical tasks. Here’s what we learned about how this technology is changing the way people solve problems, brainstorm ideas and think creatively.

ChatGPT and creative tasks

Imagine you’re searching for a creative gift idea for a teenage niece. Previously, you might have googled “creative gifts for teens” and then browsed articles until something clicked. Now, if you ask ChatGPT, it generates a direct response based on its analysis of patterns across the web. It might suggest a custom DIY project or a unique experience, crafting the idea in real time.

To explore whether ChatGPT surpasses Google in creative thinking tasks, we conducted five experiments where participants tackled various creative tasks. For example, we randomly assigned participants to either use ChatGPT for assistance, use Google search, or generate ideas on their own. Once the ideas were collected, external judges, unaware of the participants’ assigned conditions, rated each idea for creativity. We averaged the judges’ scores to provide an overall creativity rating.

One task involved brainstorming ways to repurpose everyday items, such as turning an old tennis racket and a garden hose into something new. Another asked participants to design an innovative dining table. The goal was to test whether ChatGPT could help people come up with more creative solutions compared with using a web search engine or just their own imagination.


ChatGPT did well with the task of suggesting creative ideas for reusing household items. Simon Ritzmann/DigitalVision via Getty Images

The results were clear: Judges rated ideas generated with ChatGPT’s assistance as more creative than those generated with Google searches or without any assistance. Interestingly, ideas generated with ChatGPT – even without any human modification – scored higher in creativity than those generated with Google.

One notable finding was ChatGPT’s ability to generate incrementally creative ideas: those that improve or build on what already exists. While truly radical ideas might still be challenging for AI, ChatGPT excelled at suggesting practical yet innovative approaches. In the toy-design experiment, for example, participants using ChatGPT came up with imaginative designs, such as turning a leftover fan and a paper bag into a wind-powered craft.

Limits of AI creativity

ChatGPT’s strength lies in its ability to combine unrelated concepts into a cohesive response. Unlike Google, which requires users to sift through links and piece together information, ChatGPT offers an integrated answer that helps users articulate and refine ideas in a polished format. This makes ChatGPT promising as a creativity tool, especially for tasks that connect disparate ideas or generate new concepts.

It’s important to note, however, that ChatGPT doesn’t generate truly novel ideas. It recognizes and combines linguistic patterns from its training data, subsequently generating outputs with the most probable sequences based on its training. If you’re looking for a way to make an existing idea better or adapt it in a new way, ChatGPT can be a helpful resource. For something groundbreaking, though, human ingenuity and imagination are still essential.

Additionally, while ChatGPT can generate creative suggestions, these aren’t always practical or scalable without expert input. Steps such as screening, feasibility checks, fact-checking and market validation require human expertise. Given that ChatGPT’s responses may reflect biases in its training data, people should exercise caution in sensitive contexts such as those involving race or gender.

We also tested whether ChatGPT could assist with tasks often seen as requiring empathy, such as repurposing items cherished by a loved one. Surprisingly, ChatGPT enhanced creativity even in these scenarios, generating ideas that users found relevant and thoughtful. This result challenges the belief that AI cannot assist with emotionally driven tasks.

Future of AI and creativity

As ChatGPT and similar AI tools become more accessible, they open up new possibilities for creative tasks. Whether in the workplace or at home, AI could assist in brainstorming, problem-solving and enhancing creative projects. However, our research also points to the need for caution: While ChatGPT can augment human creativity, it doesn’t replace the unique human capacity for truly radical, out-of-the-box thinking.

This shift from Googling to asking ChatGPT represents more than just a new way to access information. It marks a transformation in how people collaborate with technology to think, create and innovate.

Jaeyeon Chung, Assistant Professor of Business, Rice University


This article is republished from The Conversation under a Creative Commons license. Read the original article.



Is AI’s meteoric rise beginning to slow?

By AFP
November 18, 2024

OpenAI CEO Sam Altman fired off a social media post saying 'There is no wall' as fears arise over potential blockages to AI development - Copyright AFP/File Jason Redmond
Glenn CHAPMAN with Alex PIGMAN in Washington

A quietly growing belief in Silicon Valley could have immense implications: the breakthroughs from large AI models — the ones expected to bring human-level artificial intelligence in the near future — may be slowing down.

Since the frenzied launch of ChatGPT two years ago, AI believers have maintained that improvements in generative AI would accelerate exponentially as tech giants kept adding fuel to the fire in the form of data for training and computing muscle.

The reasoning was that delivering on the technology’s promise was simply a matter of resources — pour in enough computing power and data, and artificial general intelligence (AGI) would emerge, capable of matching or exceeding human-level performance.

Progress was advancing at such a rapid pace that leading industry figures, including Elon Musk, called for a moratorium on AI research.

Yet the major tech companies, including Musk’s own, pressed forward, spending tens of billions of dollars to avoid falling behind.

OpenAI, ChatGPT’s Microsoft-backed creator, recently raised $6.6 billion to fund further advances.

xAI, Musk’s AI company, is in the process of raising $6 billion, according to CNBC, to buy 100,000 Nvidia chips, the cutting-edge electronic components that power the big models.

However, there appear to be problems on the road to AGI.

Industry insiders are beginning to acknowledge that large language models (LLMs) aren’t scaling endlessly higher at breakneck speed when pumped with more power and data.

Despite the massive investments, performance improvements are showing signs of plateauing.

“Sky-high valuations of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence,” said AI expert and frequent critic Gary Marcus. “As I have always warned, that’s just a fantasy.”

– ‘No wall’ –

One fundamental challenge is the finite amount of language-based data available for AI training.

According to Scott Stevenson, CEO of AI legal tasks firm Spellbook, who works with OpenAI and other providers, relying on language data alone for scaling is destined to hit a wall.

“Some of the labs out there were way too focused on just feeding in more language, thinking it’s just going to keep getting smarter,” Stevenson explained.

Sasha Luccioni, researcher and AI lead at startup Hugging Face, argues a stall in progress was predictable given companies’ focus on size rather than purpose in model development.

“The pursuit of AGI has always been unrealistic, and the ‘bigger is better’ approach to AI was bound to hit a limit eventually — and I think this is what we’re seeing here,” she told AFP.

The AI industry contests these interpretations, maintaining that progress toward human-level AI is unpredictable.

“There is no wall,” OpenAI CEO Sam Altman posted Thursday on X, without elaboration.

Anthropic’s CEO Dario Amodei, whose company develops the Claude chatbot in partnership with Amazon, remains bullish: “If you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027.”

– Time to think –

Nevertheless, OpenAI has delayed the release of the awaited successor to GPT-4, the model that powers ChatGPT, because its increase in capability is below expectations, according to sources quoted by The Information.

Now, the company is focusing on using its existing capabilities more efficiently.

This shift in strategy is reflected in their recent o1 model, designed to provide more accurate answers through improved reasoning rather than increased training data.

Stevenson said an OpenAI shift to teaching its model to “spend more time thinking rather than responding” has led to “radical improvements”.

He likened the AI advent to the discovery of fire. Rather than tossing on more fuel in the form of data and computer power, it is time to harness the breakthrough for specific tasks.

Stanford University professor Walter De Brouwer likens advanced LLMs to students transitioning from high school to university: “The AI baby was a chatbot which did a lot of improv” and was prone to mistakes, he noted.

“The homo sapiens approach of thinking before leaping is coming,” he added.

How pirates steer corporate innovation: Lessons from the front lines

Calgary Innovation Week runs from Nov. 13-21, 2024.


By Chris Hogg
DIGITAL JOURNAL
November 19, 2024

Image generated by OpenAI's DALL-E via ChatGPT

If you ask Tina Mathas who should lead transformative innovation projects, she’ll tell you it’s all about the pirates.

“It requires a different type of mindset, a different type of ecosystem and environment, and it should be protected,” says Mathas, co-founder of Flow Factory, a company that aims to enhance human performance by integrating the concept of “flow state” with artificial intelligence.

For transformative innovation, she argues, big companies need pirates — not quite drunken Jack Sparrow adventurers, but individuals who challenge traditional processes and navigate uncharted waters of creativity and risk.

Mathas’s declaration set the tone for a lively virtual panel on corporate innovation at Calgary Innovation Week. The discussion brought together industry leaders to dissect how innovation can thrive in corporate environments often resistant to change.

The challenges, they agreed, are substantial, but the potential rewards for organizations that get it right are transformative.

Making the case for pirates

“Transformative innovation requires pirates,” Mathas said. “It’s not just about solving today’s problems — it’s about being bold and taking risks on where we think the industry is going.”

Mathas described her experience at ATB Financial, where her team was tasked with “breaking the bank.”

Operating with a $50,000 budget, they delivered a market-ready banking platform in just five weeks.

“We had no banking experience,” she said, “and people didn’t understand how we got that done. We had board support, and we had executive support. In other words, we reported directly into the executive and we were separate from the main organization. We were the pirates.”

This freedom is crucial, Mathas said, because transformative innovation rarely succeeds when confined by a corporation’s standard processes.

“According to an Accenture study, 82% of organizations run innovation in exactly the same way as any other regular project. Plus it takes about 18 months, and you’re still facing a 90% failure rate,” she said, telling the audience that is the reason she left the corporate world.


Innovation begins with people and alignment with business goals

Jeff Hiscock of Pembina Pipelines shifted the focus to the human element of innovation, emphasizing the challenges of workforce turnover and retention. He advised building environments that retain experienced talent while simultaneously attracting new entrants to the workforce.

“Thirty-five per cent of the energy workforce will turn over by 2035,” Hiscock said, referencing data from a provincial study. “A lot of that is through retirement. How do you create a workplace where those people want to stay in the roles longer?”

By focusing on creating workplaces that are innovative, engaging and adaptable, organizations can address this looming talent gap while driving forward their innovation goals.

Hiscock described innovation as a necessity, not a luxury, particularly in industries like energy.

“Innovation is about solving real problems that impact your business’s core value streams,” he said.

Pembina, for instance, focuses 70% of its innovation efforts on projects with direct EBITDA impacts, ensuring alignment with organizational goals.

However, Hiscock cautioned that innovation efforts often stall because of cultural resistance.

“What’s obvious to you is not obvious to everyone else,” he said. “It’s like playing a 4D chess game that only you can see. That’s a bad place to be.”

His solution? Securing buy-in from every level of the organization, not just senior executives.


From dollars to disruption

“Innovation isn’t about dollars, but it kind of is,” said Shannon Phillips, co-founder of Unbounded Thinking. Phillips’ work focuses on helping organizations, particularly small and medium-sized enterprises, implement effective innovation management systems.

He explained that many companies struggle to balance innovation’s creative potential with the financial realities of running a business.

“If we keep talking about this vague concept of innovation that is just about something new and breakthrough, we’ll never get the respect that we need. We really need to start looking at how we measure it to make it part of our DNA, and to make it a revenue stream in itself.”

Phillips outlined a structured approach to categorizing innovation: core (incremental improvements), adjacent (new markets or products), and breakthrough (disruptive technologies).

He emphasized focusing on core innovation first, as it carries the least risk, while building maturity and trust over time to approach higher-risk, breakthrough projects effectively. This holistic, balanced approach helps companies mitigate risks and align innovation with their capabilities and goals.

“For smaller companies, it’s not a buzzword — it’s about survival,” he said. “They need proof that innovation will help them grow and keep their doors open.”


Partnerships that deliver

Lee Evans, head of low-carbon activities at TC Energy, discussed how partnerships can drive innovation in meaningful ways.

“We think about win-wins,” Evans said. “How do we find ways to work with others to support each other?”

As an example, TC Energy recently invested in and partnered with Qube Technologies, a Calgary-based emissions monitoring company, to address its decarbonization goals.

Evans highlighted the importance of starting small with innovation initiatives.

“Minimum viable products are really important,” he said. “You test, you learn and then you scale.” This approach minimizes risk while building trust in the process.

Evans also stressed the need for resilience and adaptability.

“If you want to be working in this space, you’ve got to be resilient. You’ve got to be willing to face challenges and setbacks and be willing to pivot. Those are really important. And never give up if you think there’s true value in what you’re up to. Find ways to make sure people understand the value of what you’re doing.”

The role of government and academia in innovation

Panelists also weighed in on how external forces, like government policies and academic research, shape innovation.

Mathas argued that governments should incentivize competition to stimulate corporate innovation. “We need more competition coming into Canada and into Alberta to create more of that incentive to want to compete and to want to innovate.”

On the academic front, Mathas cautioned universities in their efforts to turn researchers into entrepreneurs. She said universities should focus on supporting research, not forcing students to commercialize their ideas because it can lead to a loss of investment in the research that drives real innovation.

Key takeaways for corporate innovators

The panel left attendees with practical advice for navigating the complexities of corporate innovation:
Start small, think big: “Innovate like a startup, scale like an enterprise,” said Mathas.
Embrace failure: “Failures are just learning in disguise,” she added.
Focus on core problems: Hiscock advised innovators to align their projects with a company’s key value streams.
Measure impact: “We need to make innovation part of the DNA,” said Phillips.
Be resilient: “Understand the value of what you’re doing and keep going,” said Evans.

As the panel concluded, one message was clear: the future belongs to those bold enough to embrace risk, empower people and innovate with purpose.

Whose data is it anyway? Indigenous voices call for accountability and data sovereignty in AI

Calgary Innovation Week runs from Nov. 13-21, 2024.


By Abigail Gamble
November 18, 2024
DIGITAL JOURNAL

Derek Eiteneier, Hayden Godfrey, Natasha Rabsatt, and Renard Jenkins (left to right) spoke at an Indigitech Destiny Innovation Summit panel on Monday, Nov. 18. — Photo by Jennifer Friesen, Digital Journal

“Any usage of data that does not support data sovereignty, that does not support our economic reconciliation, does not support our interests, constitutes data genocide,” said Hayden Godfrey, director of Indigenous relations at Eighth Cortex.

Godfrey was on stage at Platform Calgary Innovation Centre this morning for the Indigitech Destiny Innovation Summit talking about data sovereignty and how to ethically integrate Indigenous knowledge, language and cultural data into artificial intelligence (AI) systems.

He was joined at the Calgary Innovation Week event by fellow panelists Natasha Rabsatt, co-founder of If These Lands Could Talk, Renard Jenkins, president of I2A2 Technologies, Studios & Labs, and Derek Eiteneier, CEO of Outdoor Gala.

Together, they explored protecting Indigenous cultures, and what steps are necessary to build AI systems that foster inclusion rather than exploitation.

For Jenkins, AI tech itself isn’t what presents the greatest challenge to inclusion, but rather the people and power structures behind it.

“We should not be so concerned about the technology [of AI], but we should be concerned about who’s wielding the technology and who’s controlling the technology, and where that center of power comes in, with the technology,” he said.

Renard Jenkins is the president and CEO of I2A2 Technologies, Studios & Labs. — Photo by Jennifer Friesen, Digital Journal

Here’s what data sovereignty means and why it matters so much

Data sovereignty ensures that data remains under the control of the communities it represents, a concept Jenkins sees as fundamental to ethical use of AI.

“One of the key things that we have to pay attention to is what data is being used for the foundational model of whichever AI system that you’re using,” he explained.

“That’s where we have the biggest opportunity right now to make sure that our foundational models look like the world that we live in — instead of looking like sometimes the individuals or the groups that actually build the models.”

Adding to the discussion, Eiteneier noted that often with AI tools “there’s bias in the overall data, or it’s missing data altogether.”

This incomplete picture can lead to misinformation or skewed representations — especially of minority or marginalized communities — if not carefully addressed.

“When we’re looking at Native and Indigenous communities, I think there is a lot of apprehension around how these technologies can actually be used, and be representative of cultures that were not historically [well] represented,” Rabsatt noted as well.

But she also emphasized the necessary balance of protecting cultural sovereignty, while embracing AI’s potential.

“I think if we see AI as a tool to augment our intelligence, and to automate, I think we can do something positive with that — with mindfulness, of course — and working together with other people and communities that have the same values.”

Natasha Rabsatt is the co-founder of If These Lands Could Talk. — Photo by Jennifer Friesen, Digital Journal

What the path forward for data sovereignty protection could (should?) look like

When it comes to establishing data sovereignty policies, Rabsatt highlighted the importance of determining what information should remain private and what should be shared.

“Especially with culturally sensitive information, it’s about asking: ‘What is it we don’t want in there? What shouldn’t be open source?’ Then we decide what information we do want to input, ensuring that it creates economic advantages for our community,” she said.

Jenkins emphasized the need for global collaboration in building ethical AI systems, warning against centralized power in the hands of, say, a few large companies.

“There are literally about seven or eight large language models that the majority of the artificial intelligence tools … are actually built upon. At this time, we do not have access to how those models were built, whose data was used for those models,” he explained.

That said, he also brought up some of the challenges of figuring out how to gain access to — and establish remuneration models for — culturally specific AI data.

— Photo by Jennifer Friesen, Digital Journal

“If we go into a regulated state where all of a sudden, individuals are forced to have to reveal what’s in their models, they’re forced to actually compensate the individuals whose IP has been utilized, we may see a lot of these models actually implode, because the cost will be much higher than what they could actually sustain.”

From a regulatory perspective, Godfrey proposed the creation of a binding code of ethics to ensure that AI developers respect Indigenous sovereignty and approach their work with transparency and accountability.

“I would like to see the development of a code of ethics that tech professionals need to abide by, not optionally, but have a mandate to abide by in interacting with this data,” he said.

“We need to ensure that technology is aligned with Indigenous values, that it serves as a tool for justice and reconciliation rather than exploitation. And that starts with respecting sovereignty, one ethical choice at a time.”
AMERIKA

Establishing Workers’ Right Not to Hear Bosses’ Propaganda


The NLRB rules that management may not compel employees to attend its anti-union meetings.



by Harold Meyerson
November 18, 2024

Bastiaan Slabbers/Sipa USA via AP Images
Union members await a speech by President Biden in Philadelphia, November 1, 2024.

One of the many ways that employers intimidate workers from joining unions is via the captive-audience meeting, in which those workers are subjected to their boss’s arguments against their unionizing. Employers require their workers to attend these meetings; not attending may be, depending on the boss’s mood, grounds for being penalized, demoted, or even discharged.

Last week, the National Labor Relations Board ruled that such meetings violate the National Labor Relations Act, which was designed to give workers a free choice in deciding whether they wished to join a union. By requiring workers’ attendance at such meetings, the Board ruled, those workers’ choice became less free.

Over the past four decades, captive-audience meetings have become standard management practice when workers seek to join a union. They are a prominent feature in the Union Busting 101 courses that anti-union attorneys and consultants provide to their business clients, both big and small. Union organizers have no tool in their arsenal that can match it: Not only can they not compel workers to do anything, but they’re also forbidden from organizing within the worksite. This asymmetry, the NLRB ruled, runs counter to the letter and spirit of the NLRA.

Captive-audience meetings have also come under attack on a different front. During the past two years, ten states have outlawed them. The first was Minnesota, where the ban was enacted in early 2023, shortly after Democrats gained control of both houses of the legislature and Gov. Tim Walz signed it into law. Eight other blue trifecta states quickly followed: California, Connecticut, Hawaii, Illinois, Maine, New York, Oregon, and Washington. And a ban on those meetings was part of an omnibus pro-worker ballot measure (which also included a hike in the minimum wage and the establishment of paid sick leave) that Alaska voters enacted.

Those states, of course, didn’t base their decisions on the NLRA, which is federal law, but more simply on trying to level the playing field between employers and employees. One argument for banning the meetings that I’ve heard from some labor lawyers is that compelling employees to attend those meetings violates workers’ freedom from speech, which is something like the B-side of the First Amendment. As management lawyers take these state laws to court, they’re sure to argue that the NLRA preempts the rights of the states to address this issue. That’s one more reason why last week’s NLRB decision is so important.

That said, the NLRB is the federal agency most subject to reversing its own rulings, depending on who the president is. The Board consists of five members nominated by the president, by custom three from the president’s party and two from the opposition party, and while members’ terms are staggered, eventually the appointees of the new president outnumber those of the old. That’s why, for instance, a ruling from the Obama-era Board that said that graduate students working at private universities as teaching and research assistants were employees and thus eligible for union membership and collective bargaining was struck down by the Trump-era Board and then reinstated by the Biden Board. In all likelihood, it will be struck down again once the Board is dominated by Trump’s appointees.


What makes this reversal unlike the previous ones is that it’s only in the past couple of years, since the Biden Board gave those grad students the green light, that many thousands of them have organized most of America’s leading private universities. (The NLRA only covers private-sector employees; the 48,000 University of California grad student/employees, for instance, have unionized and won contracts because California state law permits public employees to collectively bargain.) Should the newly unionized private universities (Harvard, Yale, MIT, Caltech, etc.) revoke their grad students’ union recognition and rescind their contracts, it shouldn’t come as a surprise if those students strike and the nation’s foremost universities come to a screeching halt.

How soon could Trump appointees dominate the Board? That may be up to the current lame-duck session of the Senate, which since June has had before it the renomination of Lauren McFerran, who under Biden has been the Board chair. If she is confirmed again, she would be part of the three-member pro-worker Board majority. Until their own majority expires at year’s end, Senate Democrats have pledged to keep ratifying the judicial nominees Biden has put before them, and have come under understandable pressure from labor to confirm McFerran as well. If they do, the next expiration of a pro-worker member’s term won’t come until 2026.

Some Democrats reportedly fear that Trump will simply fire all three Democrats if the Senate reconfirms McFerran, a move that is legally contestable (not that that would deter Trump). Then again, not confirming McFerran would also quickly give the Board over to Trump.

Under McFerran, and with the strong prodding of NLRB General Counsel Jennifer Abruzzo, the Biden Board has been the most committed to ensuring workers’ rights since Franklin Roosevelt’s. It has limited employers’ ability to delay union recognition elections, increased the payments employers must make to employees they illegally fired to deter organizing campaigns, and mandated more significant remedies (including compelling employers to enter collective bargaining with their workers’ union) when employers violate the NLRA in seeking to deter unionization. Eliminating captive-audience meetings is a capstone of sorts to the Board’s campaign to restore to American workers the rights they once enjoyed, before Republicans in the White House, Congress, statehouses, and the courts concluded that worker power was an inherent threat to capital and their campaign contributors. Even if those Republicans now taking power sweep away the Biden Board’s rulings, though, those rulings would be the starting point for the next iteration of worker rights when the Democrats, as they surely will, return to power themselves.


Harold Meyerson is editor at large of The American Prospect.
Quirks of Right-Wing Populism

Far-right populists do share some things with the left. But boss rule Trumps them all.



Looking for some dim silver linings, some progressives have made the accurate observation that some right-wing populists have criticisms of capitalism that mirror the left’s. They may be, if not useful idiots, occasional allies.

For instance, Robert F. Kennedy Jr. has criticized Big Food and Big Pharma. If he can just drop some of his more lunatic views, as secretary of HHS he might shine a useful spotlight and revise some bad industry practices.

Dr. Deborah Birx, COVID response coordinator during Trump’s first term, said Sunday she expects that Kennedy’s nomination will lead to illuminating discussions about public health. Speaking on CBS’s Face the Nation, she said, "I’m actually excited that in a Senate hearing he would bring forward his data and the questions that come from the senators would bring forth their data."

CBS showed a clip of Kennedy saying, "I’m just going to tell the cereal companies to take all the dyes out of their food. I’ll get processed food out of school lunch immediately. Ten percent of food stamps go to sugar drinks, to, you know, sodas. We’re creating a diabetes problem, and we’re giving our kids food that’s poison, and I’m going to stop that."

Birx is actually a serious person. She served as Obama’s global AIDS coordinator. Could she be onto something?

The populist right also has mixed feelings about tech billionaire monopolies (Elon Musk and Peter Thiel excepted) because of their fundraising for Democrats and their socially liberal views. Our friend Matt Stoller published a startling item on the admiration of Matt Gaetz for FTC Chair Lina Khan, charmingly describing Gaetz as a "Khanservative."

As Stoller wrote, "Gaetz proudly calls himself a Lina Khan fan, and filed a brief with the conservative Fifth Circuit asserting that the Federal Trade Commission has the authority to ban non-compete agreements, and personally hosted her as a guest on a show on Newsmax to discuss how to get rid of ‘creepy’ commercial surveillance. He has praised the Biden Antitrust Division’s Jonathan Kanter’s work on Google."

Stoller also quoted Gaetz: "It is my belief that the number one threat to our liberty is big government. It is also my belief that the number two big threat to our liberty is big business, when big business is able to use the apparatus of government to wrap around its objectives."

This sounds hopeful, but Gaetz may well not get confirmed as attorney general. And if he does, he still has to answer to Trump, who could easily find antitrust officials with his own highly selective views of which monopoly abuses to go after and which to give a pass.

And while Kennedy does have some views that are critical of Big Food and Big Pharma, consider what happened on the food front over the weekend.

In the past, RFK Jr. has been highly critical of Trump’s diet. "The stuff that he eats is really, like, bad," he said. "Campaign food is always bad, but the food that goes onto that airplane is, like, just poison. You have a choice between—you don’t have the choice, you’re either given KFC or Big Macs. That’s, like, when you’re lucky, and then the rest of the stuff I consider kind of inedible."

Well, on Sunday, all the talk shows showed images of Trump forcing Kennedy to choke down a burger, fries, and a Coke.

The deeper problem with far-right populism is that the boss is the boss of bosses. Because far-fringe appointees like Gaetz and Kennedy, if confirmed, will be entirely creatures of Trump’s whims, they will do what he says.

In a poem celebrating a brave conscientious objector named Olaf who was brutalized by his captors, E.E. Cummings wrote, "There is some shit I will not eat." That evidently does not describe RFK Jr.


~ ROBERT KUTTNER
The American Prospect.