Thursday, April 24, 2025

SK hynix posts record profits thanks to strong AI demand


By AFP
April 23, 2025


Company and national flags fly outside the SK hynix Bundang office in Seongnam - Copyright AFP/File Jung Yeon-je

South Korean chip giant SK hynix reported record quarterly profits Thursday thanks to soaring global demand for artificial intelligence, highlighting the firm’s ability to weather mounting tariff threats.

The world’s second-largest memory chip maker dominates the market for high-bandwidth memory (HBM) semiconductors and is a key supplier for US titan Nvidia.

SK hynix said it recorded an operating profit of 7.44 trillion won ($5.19 billion) — a nearly 158 percent year-on-year increase — on revenues of 17.64 trillion won from January–March.

Both figures marked the company’s second-highest quarterly results on record, following last quarter’s performance.

The news comes after Taiwanese chip giant TSMC last week announced a surge in net profit for the first quarter and forecast robust demand for artificial intelligence technology, despite the spectre of US tariffs on the critical sector.

Net income also quadrupled compared to the previous year to 8.11 trillion won ($5.67 billion), with the firm saying the “memory market ramped up faster than expected due to competition to develop AI systems and inventory accumulation demand”.

The company added that its annual HBM sales for this year are expected to double compared to last year.

Despite the news, SK hynix’s shares fell more than one percent in Seoul morning trade.



– Less affected –



South Korea is a major exporter to the United States and its powerhouse semiconductor and auto industries would suffer greatly under President Donald Trump’s looming 25 percent tariffs.

The country is also home to the world’s largest memory chip maker, Samsung.

Experts attribute SK hynix’s resilience to the company’s growth in the DRAM market.

SK hynix recently took the lead in DRAM revenues with a 36 percent market share, according to specialist research firm Counterpoint, surpassing Samsung for the first time and marking the first change in the top spot in over four decades.

“Right now the world is focused on the impact of tariffs, so the question is: what’s going to happen with HBM DRAM?” said Counterpoint research director MS Hwang.

“At least in the short term, the segment is less likely to be affected by any trade shock as AI demand should remain strong. More significantly, the end product for HBM is AI servers, which — by definition — can be borderless.”

During a conference call, SK hynix noted that “uncertainty has grown around demand for semiconductors”, but said its sales plans for key clients this year “remain unchanged”.

“Global customers are, overall, maintaining their previously discussed memory demand levels with us,” said an SK hynix official.

“Additionally, some clients are pulling forward demand by requesting short-term supply advances,” the company said.

The company also noted that while roughly three-fifths of its sales are to US-based customers, tariffs apply only to products shipped directly to the United States.

“Even when our clients are headquartered in the US, memory products are often shipped to locations outside the US, meaning the actual proportion of direct exports to the US is not particularly high,” an SK hynix official said.

AI will boost the service sector but it will never replace the human


By Dr. Tim Sandle
April 22, 2025
DIGITAL JOURNAL


Worker answering calls. — Image by © Tim Sandle.

Working in a call centre can be challenging. Customers are often distressed, and workloads can be significant, leading to mounting pressure to resolve issues quickly and effectively.

This is where AI has the potential to revolutionise the way agents work, enabling them to address problems more efficiently and provide customers with highly personalised solutions. That potential could translate into better productivity, but AI is unlikely to fully replace the human operator, according to a leading expert.

But how will AI transform the call centre, and could it eventually replace agents entirely?

Digital Journal spoke with Ben Booth, CEO and Co-Founder of MaxContact, a contact centre software specialist, who weighed in on whether AI will completely replace traditional call centres and on how call centres can make the most of the developing technology.

Booth explains the possible advantages: “As AI technology has developed in the past few years, more businesses than ever are now using it to enhance customer experience, call centres included. While replacing traditional call centres completely in favour of chatbots by some companies is certainly an interesting move, it’s not the only industry to be doing so.”

Job losses?

Booth considers job cuts to be unlikely in the longer-term, noting: “Indeed, there has been a lot of talk over the past year about job losses in call centres due to AI. But even though it may speed up response times, many people call customer service lines in the first place because they want to talk to a real person who will not only help them with any problems they may be having, but also someone who will understand and care about their query.”

Customer-centric

Booth foresees AI as boosting the customer-centric remit of most call centres: “According to our survey on agent and team performance, the top priorities for contact centre leaders are delivering excellent service (47.0%) and ensuring team happiness (46.6%). Therefore, in order to achieve these goals, call centres need to focus on building a customer-centric culture that empowers agents to deliver great experiences. Investing in training that enhances product knowledge, communication, and problem-solving while equipping agents with the tools and authority to resolve customer issues efficiently is key.”

Limited now, potential future

AI is, however, not sufficiently advanced to deliver maximum efficiency: “Unfortunately, at the moment AI technologies are not advanced enough to empathise in the same way humans can. Not only that, but some problems that people need help with don’t have clear-cut answers, and at the moment AI technologies on their own cannot improvise or offer personalised solutions in the same way that humans can.”

Booth reveals an example to illustrate his point: “I’ve recently read about a company that has chosen to completely turn off their phone lines in favour of using AI and chatbots, and reviews suggest they have had less success in terms of customer satisfaction and resolving issues.”

So, what does this mean in terms of practical value? Booth advises: “Therefore, instead of opting to completely replace traditional call centres in favour of AI, call centres should be using new technology to leverage and add to their business to enhance customer service, rather than replacing humans completely.”

Automation

Booth concludes with what AI can achieve today: “Incorporating AI into call centres offers significant benefits without the need to replace agents entirely. With an average of 35% of calls manually evaluated each week by contact centres, AI speech analytics can help manage this by automatically identifying sentiment analysis from any calls, allowing agents to focus on more complex, value-added interactions. Tools like chatbots and knowledge bases reduce the volume of simple enquiries, while AI-powered workforce management systems optimise staffing levels and predict demand more accurately.”
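As a rough illustration of the kind of speech-analytics step Booth describes, the short Python sketch below scores the sentiment of call transcripts with an off-the-shelf classifier. It is a generic example built on the open-source Hugging Face transformers library, not MaxContact’s product or pipeline, and the transcripts are invented.

# Illustrative sketch only: generic sentiment scoring of call transcripts.
# This is not MaxContact's product; it assumes the open-source Hugging Face
# "transformers" library and its default sentiment-analysis model.
from transformers import pipeline

# Load a general-purpose sentiment classifier (downloads a default model).
classifier = pipeline("sentiment-analysis")

transcripts = [
    "I've been waiting forty minutes and nobody can tell me where my order is.",
    "Thanks, that fixed it straight away - really appreciate the help.",
]

for text in transcripts:
    result = classifier(text)[0]
    print(f"{result['label']:>8}  ({result['score']:.2f})  {text}")

In practice, scores like these would be aggregated per call so that agents and supervisors only review the conversations flagged as most negative, rather than sampling calls by hand.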

In short, this means that humans could have more time: “By freeing agents from repetitive manual tasks, AI enhances efficiency and supports a healthier work-life balance without compromising job satisfaction or performance. It’s about creating a balanced approach where AI complements human skills, driving both operational improvements and agent well-being.”

“It can also help to improve and empower agents’ responses and recommendations to customers. While AI cannot fully replace human empathy, it can make suggestions that help agents deal with difficult situations and, in the long run, improve sales and business.”

Future perfect?

Booth ends by speculating what AI might deliver: “Overall, it is not yet clear whether AI will reach a point where it can ever fully replace agents in call centres. Instead, businesses should consider how they can use cutting-edge technology to improve their operations and increase sales and profits.”

New approach makes AI adaptable for computer vision in crop breeding



University of Illinois at Urbana-Champaign, News Bureau
Image: Andrew Leakey and his colleagues developed an AI tool that uses minimal training to teach itself to distinguish the flowers of thousands of varieties of Miscanthus, a plant used in biofuels production. Credit: Photo by Craig Pessman




CHAMPAIGN, Ill. — Scientists developed a machine-learning tool that can teach itself, with minimal external guidance, to differentiate between aerial images of flowering and nonflowering grasses — an advance that will greatly increase the pace of agricultural field research, they say. The work was conducted using images of thousands of varieties of Miscanthus grasses, each of which has its own flowering traits and timing.

Accurately differentiating crop traits under varied conditions at different points in the growing cycle is a formidable task, said Andrew Leakey, a professor of plant biology and of crop sciences at the University of Illinois Urbana-Champaign, who led the new work with Sebastian Varela, a scientist at the Center for Advanced Bioenergy and Bioproducts Innovation, which Leakey directs.

The new approach should be applicable to numerous other crops and computer-vision problems, Leakey said.

The findings are reported in the journal Plant Physiology.

“Flowering time is a key trait influencing productivity and the adaptation of many crops, including Miscanthus, to different growing regions,” Leakey said. “But repetitive visual inspections of thousands of individual plants grown in extensive field trials is very labor intensive.” Automating that process by collecting images via aerial drones and using artificial intelligence to extract the relevant data from those images can streamline the process and make it more manageable. But building AI models that can distinguish subtle features in complex images usually requires vast amounts of human-annotated data, Leakey said. “Generating that data is very time-consuming. And deep-learning methods tend to be very context-dependent.”

This means that when the context changes — for example, when the model must distinguish the features of a different crop or the same crop at different locations or times of year — it likely will need to be retrained using new annotated images that reflect those new conditions, he said.

“There are tons of examples where people have provided proof-of-concept for using AI to accelerate the use of sensor technologies — ranging from leaf sensors to satellites — across applications in breeding, soil and crop sciences, but it’s not being very widely adopted right now, or not as widely adopted as you might hope. We think one of the big reasons for that is this huge amount of effort needed to train the AI tool,” Leakey said.

To cut down on the need for human-annotated training data, Varela turned to a well-known method for prompting two AI models to compete with one another in what is known as a “generative adversarial network,” or GAN. A common application of GANs is for one model to generate fake images of a desired scene and for a second model to review the images to determine which are fake and which are real. Over time, the models improve one another, Varela said. Model one generates more realistic fakes, and model two gets better at distinguishing the fake images from the real ones.
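For readers who want to see the mechanics, the sketch below is a minimal, generic GAN training loop in Python with PyTorch. It is illustrative only: the tiny fully connected networks and random tensors stand in for real drone imagery and are not the architecture used in the Plant Physiology paper.

# Minimal GAN training loop sketch (illustrative; not the paper's ESGAN).
# Assumes PyTorch; tiny fully connected nets stand in for real image models.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))  # real-vs-fake logit

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator learns to score real images as 1 and generated ones as 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to make the discriminator score its fakes as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# One step on a random batch standing in for flattened aerial images.
print(train_step(torch.randn(16, image_dim)))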

In the process, the models gain visual expertise in the specific subject matter, allowing them to better parse the details of any new images they encounter. Varela hypothesized that he could put this self-generated expertise to work to reduce the number of annotated images required to train the models to distinguish among many different crops. In the process, he created an “efficiently supervised generative and adversarial network,” or ESGAN.
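One common way to realise that idea is the semi-supervised GAN recipe, in which the discriminator doubles as a classifier with an extra “fake” class, so a small set of annotated images, a large pool of unlabelled images, and generated fakes all contribute to training. The Python sketch below follows that generic recipe; it is a hedged illustration, not the exact ESGAN design described in the paper.

# Hedged sketch of a semi-supervised GAN discriminator loss: the network
# predicts K real classes plus one extra "fake" class. Generic recipe only;
# the paper's ESGAN architecture and losses may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 2                  # e.g. flowering vs nonflowering
fake_class = num_classes         # index of the extra "fake" class
classifier = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(),
                           nn.Linear(256, num_classes + 1))

def semi_supervised_d_loss(labelled_x, labels, unlabelled_x, fake_x):
    # Supervised term: the few human-annotated images keep their real labels.
    sup = F.cross_entropy(classifier(labelled_x), labels)

    # Unlabelled real images: push probability mass away from the "fake" class.
    p_fake_real = F.softmax(classifier(unlabelled_x), dim=1)[:, fake_class]
    unsup_real = -torch.log(1.0 - p_fake_real + 1e-8).mean()

    # Generated images: push them into the "fake" class.
    fake_targets = torch.full((fake_x.size(0),), fake_class, dtype=torch.long)
    unsup_fake = F.cross_entropy(classifier(fake_x), fake_targets)

    return sup + unsup_real + unsup_fake

# Example shapes: 8 labelled, 64 unlabelled, 64 generated images (flattened).
loss = semi_supervised_d_loss(torch.randn(8, 784),
                              torch.randint(0, num_classes, (8,)),
                              torch.randn(64, 784), torch.randn(64, 784))
print(loss.item())

The point of the design is that only the first term needs human annotations; the other two terms let the large unlabelled and generated image pools shape the same classifier, which is why the annotation requirement can drop so sharply.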

In a series of experiments, the researchers tested the accuracy of their ESGAN against existing AI training protocols. They found that ESGAN “reduced the requirement for human-annotated data by one-to-two orders of magnitude” over “traditional, fully supervised learning approaches.”

The new findings represent a major reduction in the effort needed to develop and use custom-trained machine-learning models to determine flowering time “involving other locations, breeding populations or species,” the researchers report. “And the approach paves the way to overcome similar challenges in other areas of biology and digital agriculture.”

Leakey and Varela will continue to work with Miscanthus breeder Erik Sacks to apply the new method to data from a multistate Miscanthus breeding trial. The trial aims to develop regionally adapted lines of Miscanthus that can be used as a feedstock to produce biofuels and high value bioproducts on land that is not currently profitable to farm.

“We hope our new approach can be used by others to ease the adoption of AI tools for crop improvement involving a wider variety of traits and species, thereby helping to broadly bolster the bioeconomy,” Leakey said.

Leakey is a professor in the Carl R. Woese Institute for Genomic Biology, the Institute for Sustainability, Energy and Environment and the Center for Digital Agriculture at the U. of I.

The U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research; the U.S. Department of Agriculture, Agriculture and Food Research Initiative; and Tito’s Handmade Vodka supported this research. 

 

Editor’s note:  

To reach Andrew Leakey, email leakey@illinois.edu.  

To reach Sebastian Varela, email sv79@illinois.edu.
 

The paper “Breaking the barrier of human-annotated training data for machine-learning-aided plant research using aerial imagery” is available online.


DOI: 10.1093/plphys/kiaf132
