SK hynix posts record profits thanks to strong AI demand
By AFP
April 23, 2025

Company and national flags fly outside the SK hynix Bundang office in Seongnam - Copyright AFP/File Jung Yeon-je
South Korean chip giant SK hynix reported record quarterly profits Thursday thanks to soaring global demand for artificial intelligence, highlighting the firm’s ability to weather mounting tariff threats.
The world’s second-largest memory chip maker dominates the market for high-bandwidth memory (HBM) semiconductors and is a key supplier for US titan Nvidia.
SK hynix said it recorded an operating profit of 7.44 trillion won ($5.19 billion) — a nearly 158 percent year-on-year increase — on revenues of 17.64 trillion won from January–March.
Both figures marked the company’s second-highest quarterly results on record, following last quarter’s performance.
The news comes after Taiwanese chip giant TSMC last week announced a surge in net profit for the first quarter and forecast robust demand for artificial intelligence technology, despite the spectre of US tariffs on the critical sector.
Net income also quadrupled compared to the previous year to 8.11 trillion won ($5.67 billion), with the firm saying the “memory market ramped up faster than expected due to competition to develop AI systems and inventory accumulation demand”.
The company added that its annual HBM sales for this year are expected to double compared to last year.
Despite the news, SK hynix’s shares fell more than one percent in Seoul morning trade.
– Less affected –
South Korea is a major exporter to the United States and its powerhouse semiconductor and auto industries would suffer greatly under President Donald Trump’s looming 25 percent tariffs.
The country is also home to the world’s largest memory chip maker, Samsung.
Experts attribute SK hynix’s resilience to the company’s growth in the DRAM market.
SK hynix recently took the lead in DRAM revenues with a 36 percent market share, according to specialist research firm Counterpoint, surpassing Samsung for the first time and marking the first change in the top spot in over four decades.
“Right now the world is focused on the impact of tariffs, so the question is: what’s going to happen with HBM DRAM?” said Counterpoint research director MS Hwang.
“At least in the short term, the segment is less likely to be affected by any trade shock as AI demand should remain strong. More significantly, the end product for HBM is AI servers, which — by definition — can be borderless.”
During a conference call, SK hynix noted that “uncertainty has grown around demand for semiconductors”, but sales plans for key clients for the company this year “remain unchanged”.
“Global customers are, overall, maintaining their previously discussed memory demand levels with us,” said an SK hynix official.
“Additionally, some clients are pulling forward demand by requesting short-term supply advances,” the company said.
The company also noted that while roughly three-fifths of its sales are to US-based customers, tariffs apply only to products shipped directly to the United States.
“Even when our clients are headquartered in the US, memory products are often shipped to locations outside the US, meaning the actual proportion of direct exports to the US is not particularly high,” an SK hynix official said.
AI will boost the service sector but it will never replace the human
By Dr. Tim Sandle
April 22, 2025, Digital Journal

Worker answering calls. — Image by © Tim Sandle.
Working in a call centre can be challenging. Customers are often distressed, and workloads can be significant, leading to mounting pressure to resolve issues quickly and effectively.
This is where AI has the potential to revolutionise the way agents work, enabling them to address problems more efficiently and provide customers with highly personalised solutions. This has the potential to boost productivity, but AI is unlikely to fully replace the human operator, according to a leading expert.
But how will AI transform the call centre, and could it eventually replace agents entirely?
Digital Journal heard from Ben Booth, CEO and co-founder of MaxContact, a contact centre software specialist, who weighed in on whether AI will completely replace traditional call centres and on how call centres can make the most of the developing technology.
Booth explains the possible advantages: “As AI technology has developed in the past few years, more businesses than ever are now using it to enhance customer experience, call centres included. While replacing traditional call centres completely in favour of chatbots by some companies is certainly an interesting move, it’s not the only industry to be doing so.”
Job losses?
Booth considers job cuts to be unlikely in the longer-term, noting: “Indeed, there has been a lot of talk over the past year about job losses in call centres due to AI. But even though it may speed up response times, many people call customer service lines in the first place because they want to talk to a real person who will not only help them with any problems they may be having, but also someone who will understand and care about their query.”
Customer-centric
Booth foresees AI as boosting the customer-centric remit of most call centres: “According to our survey on agent and team performance, the top priorities for contact centre leaders are delivering excellent service (47.0%) and ensuring team happiness (46.6%). Therefore, in order to achieve these goals, call centres need to focus on building a customer-centric culture that empowers agents to deliver great experiences. Investing in training that enhances product knowledge, communication, and problem-solving while equipping agents with the tools and authority to resolve customer issues efficiently is key.”
Limited now, potential future
AI is, however, not yet sufficiently advanced to deliver maximum efficiency: “Unfortunately, at the moment AI technologies are not advanced enough to empathise in the same way humans can. Not only that, but some problems that people need help with don’t have clear-cut answers, and at the moment AI technologies on their own cannot improvise or offer personalised solutions in the same way that humans can.”
Booth reveals an example to illustrate his point: “I’ve recently read about a company that has chosen to completely turn off their phone lines in favour of using AI and chatbots, and reviews suggest they have had less success in terms of customer satisfaction and resolving issues.”
So, what does this mean in terms of practical value? Booth advises: “Therefore, instead of opting to completely replace traditional call centres in favour of AI, call centres should be using new technology to leverage and add to their business to enhance customer service, rather than replacing humans completely.”
Automation
Booth concludes with what AI can achieve today: “Incorporating AI into call centres offers significant benefits without the need to replace agents entirely. With an average of 35% of calls manually evaluated each week by contact centres, AI speech analytics can help manage this by automatically identifying sentiment analysis from any calls, allowing agents to focus on more complex, value-added interactions. Tools like chatbots and knowledge bases reduce the volume of simple enquiries, while AI-powered workforce management systems optimise staffing levels and predict demand more accurately.”
In short, this means that humans could have more time: “By freeing agents from repetitive manual tasks, AI enhances efficiency and supports a healthier work-life balance without compromising job satisfaction or performance. It’s about creating a balanced approach where AI complements human skills, driving both operational improvements and agent well-being.”
“It can also help to improve and empower agents’ responses and recommendations to customers. While AI cannot fully replace human empathy, it can make suggestions that help agents deal with difficult situations and, in the long run, improve sales and business.”
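As an illustration of the kind of speech analytics Booth describes, the snippet below scores the sentiment of a single call-transcript line with an off-the-shelf model. It is a minimal sketch using the Hugging Face pipeline helper, not MaxContact's product or any specific vendor tooling.

```python
# Illustrative sentiment check on a call-transcript snippet (not a vendor product).
from transformers import pipeline

# Loads a default English sentiment model on first use.
sentiment = pipeline("sentiment-analysis")

transcript_line = "I've been waiting two weeks and nobody has called me back."
print(sentiment(transcript_line))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}] -- a cue to prioritise or escalate the call
```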
Future perfect?
Booth ends by speculating what AI might deliver: “Overall, it is not yet clear whether AI will reach a point where it can ever fully replace agents in call centres. Instead, businesses should consider how they can use cutting-edge technology to improve their operations and increase sales and profits.”

New approach makes AI adaptable for computer vision in crop breeding
University of Illinois at Urbana-Champaign, News Bureau
Image: Andrew Leakey and his colleagues developed an AI tool that uses minimal training to teach itself to distinguish the flowers of thousands of varieties of Miscanthus, a plant used in biofuels production. Credit: Photo by Craig Pessman
CHAMPAIGN, Ill. — Scientists developed a machine-learning tool that can teach itself, with minimal external guidance, to differentiate between aerial images of flowering and nonflowering grasses — an advance that will greatly increase the pace of agricultural field research, they say. The work was conducted using images of thousands of varieties of Miscanthus grasses, each of which has its own flowering traits and timing.
Accurately differentiating crop traits under varied conditions at different points in the growing cycle is a formidable task, said Andrew Leakey, a professor of plant biology and of crop sciences at the University of Illinois Urbana-Champaign, who led the new work with Sebastian Varela, a scientist at the Center for Advanced Bioenergy and Bioproducts Innovation, which Leakey directs.
The new approach should be applicable to numerous other crops and computer-vision problems, Leakey said.
The findings are reported in the journal Plant Physiology.
“Flowering time is a key trait influencing productivity and the adaptation of many crops, including Miscanthus, to different growing regions,” Leakey said. “But repetitive visual inspections of thousands of individual plants grown in extensive field trials is very labor intensive.” Automating that process by collecting images via aerial drones and using artificial intelligence to extract the relevant data from those images can streamline the process and make it more manageable. But building AI models that can distinguish subtle features in complex images usually requires vast amounts of human-annotated data, Leakey said. “Generating that data is very time-consuming. And deep-learning methods tend to be very context-dependent.”
This means that when the context changes — for example, when the model must distinguish the features of a different crop or the same crop at different locations or times of year — it likely will need to be retrained using new annotated images that reflect those new conditions, he said.
“There are tons of examples where people have provided proof-of-concept for using AI to accelerate the use of sensor technologies — ranging from leaf sensors to satellites — across applications in breeding, soil and crop sciences, but it’s not being very widely adopted right now, or not as widely adopted as you might hope. We think one of the big reasons for that is this huge amount of effort needed to train the AI tool,” Leakey said.
To cut down on the need for human-annotated training data, Varela turned to a well-known method for prompting two AI models to compete with one another in what is known as a “generative adversarial network,” or GAN. A common application of GANs is for one model to generate fake images of a desired scene and for a second model to review the images to determine which are fake and which are real. Over time, the models improve one another, Varela said. Model one generates more realistic fakes, and model two gets better at distinguishing the fake images from the real ones.
In the process, the models gain visual expertise in the specific subject matter, allowing them to better parse the details of any new images they encounter. Varela hypothesized that he could put this self-generated expertise to work to reduce the number of annotated images required to train the models to distinguish among many different crops. In the process, he created an “efficiently supervised generative and adversarial network,” or ESGAN.
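For readers who want a concrete picture of the adversarial setup described above, the sketch below shows a generic GAN training step in PyTorch. It is illustrative only, with toy network sizes and random stand-in images; it is not the authors' ESGAN, which adds the efficient supervision described in the paper.

```python
# Generic GAN training step (illustrative; not the authors' ESGAN).
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise fed to the generator

# Toy generator: noise -> flattened 64x64 "image"; toy discriminator: image -> real/fake score.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 64 * 64), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One round of the two-player game on a batch of real images scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce fakes the discriminator labels as real.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example usage with random stand-in "images".
print(train_step(torch.rand(16, 64 * 64) * 2 - 1))
```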
In a series of experiments, the researchers tested the accuracy of their ESGAN against existing AI training protocols. They found that ESGAN “reduced the requirement for human-annotated data by one-to-two orders of magnitude” over “traditional, fully supervised learning approaches.”
The new findings represent a major reduction in the effort needed to develop and use custom-trained machine-learning models to determine flowering time “involving other locations, breeding populations or species,” the researchers report. “And the approach paves the way to overcome similar challenges in other areas of biology and digital agriculture.”
Leakey and Varela will continue to work with Miscanthus breeder Erik Sacks to apply the new method to data from a multistate Miscanthus breeding trial. The trial aims to develop regionally adapted lines of Miscanthus that can be used as a feedstock to produce biofuels and high value bioproducts on land that is not currently profitable to farm.
“We hope our new approach can be used by others to ease the adoption of AI tools for crop improvement involving a wider variety of traits and species, thereby helping to broadly bolster the bioeconomy,” Leakey said.
Leakey is a professor in the Carl R. Woese Institute for Genomic Biology, the Institute for Sustainability, Energy and Environment and the Center for Digital Agriculture at the U. of I.
The U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research; the U.S. Department of Agriculture, Agriculture and Food Research Initiative; and Tito’s Handmade Vodka supported this research.
Editor’s note:
To reach Andrew Leakey, email leakey@illinois.edu.
To reach Sebastian Varela, email sv79@illinois.edu.
The paper “Breaking the barrier of human-annotated training data for machine-learning-aided plant research using aerial imagery” is available online.
Journal
Plant Physiology
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
Breaking the barrier of human-annotated training data for machine-learning-aided plant research using aerial imagery
Article Publication Date
23-Apr-2025
COI Statement
A patent on ESGAN has been filed by the University of Illinois Urbana-Champaign with A.D.B.L. and S.V. as inventors. The authors declare no conflict of interest.
Awkward. Humans are still better than AI at reading the room
Johns Hopkins research shows AI models fall short in predicting social interactions
Johns Hopkins University
Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene—a skill necessary for self-driving cars, assistive robots, and other technologies that rely on AI systems to navigate the real world.
The research, led by scientists at Johns Hopkins University, finds that artificial intelligence systems fail at understanding social dynamics and context necessary for interacting with people and suggests the problem may be rooted in the infrastructure of AI systems.
“AI for a self-driving car, for example, would need to recognize the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street,” said lead author Leyla Isik, an assistant professor of cognitive science at Johns Hopkins University. “Any time you want an AI to interact with humans, you want it to be able to recognize what people are doing. I think this sheds light on the fact that these systems can’t right now.”
Kathy Garcia, a doctoral student working in Isik’s lab at the time of the research and co–first author, will present the research findings at the International Conference on Learning Representations on April 24.
To determine how AI models measure up compared to human perception, the researchers asked human participants to watch three-second video clips and rate features important for understanding social interactions on a scale of one to five. The clips included people either interacting with one another, performing side-by-side activities, or conducting independent activities on their own.
The researchers then asked more than 350 AI language, video, and image models to predict how humans would judge the videos and how their brains would respond to watching. For large language models, the researchers had the AIs evaluate short, human-written captions.
Participants, for the most part, agreed with each other on all the questions; the AI models, regardless of size or the data they were trained on, did not. Video models were unable to accurately describe what people were doing in the videos. Even image models that were given a series of still frames to analyze could not reliably predict whether people were communicating. Language models were better at predicting human behavior, while video models were better at predicting neural activity in the brain.
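As a toy illustration of this kind of benchmark (with invented numbers, not data from the study), one common check is whether a model's predicted ratings track the average human ratings of the same clips:

```python
# Hypothetical model-versus-human comparison; all numbers are invented.
from scipy.stats import spearmanr

human_ratings = [4.6, 1.2, 3.8, 2.1, 4.9]  # mean human score per clip, 1-5 scale
model_ratings = [3.1, 2.9, 3.0, 3.2, 3.3]  # one model's predicted scores for the same clips

rho, p_value = spearmanr(human_ratings, model_ratings)
print(f"Spearman correlation: {rho:.2f} (p = {p_value:.2f})")  # low correlation = poor match
```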
The results provide a sharp contrast to AI’s success in reading still images, the researchers said.
“It’s not enough to just see an image and recognize objects and faces. That was the first step, which took us a long way in AI. But real life isn’t static. We need AI to understand the story that is unfolding in a scene. Understanding the relationships, context, and dynamics of social interactions is the next step, and this research suggests there might be a blind spot in AI model development,” Garcia said.
Researchers believe this is because AI neural networks were inspired by the infrastructure of the part of the brain that processes static images, which is different from the area of the brain that processes dynamic social scenes.
“There’s a lot of nuances, but the big takeaway is none of the AI models can match human brain and behavior responses to scenes across the board, like they do for static scenes,” Isik said. “I think there’s something fundamental about the way humans are processing scenes that these models are missing.”
Article Title
Modeling dynamic social vision highlights gaps between deep learning and humans
Article Publication Date
24-Apr-2025
AI provides reliable answers with less computational overhead
A new method makes AI responses increasingly reliable. The algorithm specifically selects data relevant to the question. In addition, even AI models up to 40 times smaller achieve the same output performance as the best large AI models.
ChatGPT and the like often amaze us with the accuracy of their answers, but unfortunately, they also repeatedly give us cause for doubt. The main issue with powerful AI answer engines is that they provide us with perfect answers and obvious nonsense with the same ease. One of the major challenges lies in how the large language models (LLMs) underlying AI deal with uncertainty. Until now, it has been very difficult to assess whether LLMs designed for text processing and generation base their responses on a solid foundation of data or whether they are operating on uncertain ground.
Researchers at the Institute for Machine Learning at the Department of Computer Science at ETH Zurich have now developed a method that can be used to specifically reduce the uncertainty of AI. “Our algorithm can enrich the general language model of the AI with additional data from the relevant subject area of a question. In combination with the specific question, we can then extract from the depths of the model and from the enrichment data precisely those connections that are most likely to generate a correct answer,” explains Jonas Hübotter from the Learning & Adaptive Systems Group, who developed the new method as part of his PhD studies.
Enriching AI with specific data
“The method is particularly suitable for companies, scientists or other users who want to use general AI in a specialised field that is only covered partially or not at all by the AI training data,” adds Andreas Krause, head of the research group and Director of the ETH AI Centre.
For example, users can feed their locally stored data into a large language model (LLM), such as Llama. The so-called SIFT algorithm (Selecting Informative data for Fine-Tuning), developed by ETH computer scientists, can then use the additional data provided to select specific information that is most closely related to the question.
Relationship vectors in multidimensional space
The algorithm uses the structure according to which the language information is organised in the AI’s large language model (LLM) to find related information. The models divide the language information in their training data into word parts. The semantic and syntactic relationships between the word parts are then arranged as connecting arrows – known in the field as vectors – in a multidimensional space. The dimensions of this space, which can number in the thousands, arise from the relationship parameters that the LLM independently identifies during training using the general data.
Angle between arrows as measure of correlation
Relational arrows pointing in the same direction in this vector space indicate a strong correlation. The larger the angle between two vectors, the less two units of information relate to one another.
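As a tiny numerical illustration of this idea (with made-up three-dimensional vectors; real LLM embedding spaces have thousands of dimensions), the cosine of the angle between two vectors is close to 1 for near-parallel, strongly related items and close to 0 for near-orthogonal, unrelated ones:

```python
# Angle as a measure of relatedness (hypothetical 3-d vectors).
import numpy as np

a = np.array([1.0, 0.2, 0.0])  # e.g. a "birthday" fact
b = np.array([0.9, 0.3, 0.1])  # a nearby "age" fact, almost the same direction
c = np.array([0.0, 0.1, 1.0])  # an unrelated fact, almost orthogonal to a

cos_ab = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
cos_ac = a @ c / (np.linalg.norm(a) * np.linalg.norm(c))
print(round(cos_ab, 2), round(cos_ac, 2))  # roughly 0.99 vs 0.02
```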
The SIFT algorithm developed by ETH researchers now uses the direction of the relationship vector of the input query (prompt) to identify those information relationships that are closely related to the question but at the same time complement each other in terms of content. “The angle between the vectors corresponds to the relevance of the content, and we can use the angles to select specific data that reduces uncertainty,” explains Hübotter.
Less overlap from redundant information
By contrast, the most common method used to date for selecting the information suitable for the answer, known as the nearest neighbour method, tends to accumulate redundant information that is widely available. The difference between the two methods becomes clear when looking at an example of a query prompt that is composed of several pieces of information.
To answer the two-part question “How old is Roger Federer and how many children does he have?”, the nearest neighbour method considers similar information such as “Roger Federer is 43 years old” and “Roger Federer’s birthday is 8 August 1981” to be equally relevant. Information about his children, which is relevant for the second part of the question, is sometimes missing. It is overlaid by birth date information, which occurs much more frequently in the AI training data. The SIFT algorithm, however, takes into account the extent to which the pieces of information included complement each other, i.e. whether the information vectors point in different directions. This allows relevant information to be identified for both aspects of the question.
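The contrast can be sketched in a few lines of code. The greedy rule below penalises candidates that point in the same direction as information already selected; it is only a maximal-marginal-relevance-style stand-in for the idea, not the actual SIFT criterion, which is based on reducing the model's uncertainty.

```python
# Nearest-neighbour retrieval versus a redundancy-aware greedy selection (illustrative).
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbours(query, docs, k):
    # Pick the k items most similar to the query, ignoring redundancy.
    return sorted(range(len(docs)), key=lambda i: -cos(query, docs[i]))[:k]

def diverse_selection(query, docs, k, trade_off=0.5):
    # Greedily pick items that are relevant to the query but not redundant
    # with items already selected.
    selected = []
    while len(selected) < k:
        def score(i):
            relevance = cos(query, docs[i])
            redundancy = max((cos(docs[i], docs[j]) for j in selected), default=0.0)
            return relevance - trade_off * redundancy
        selected.append(max((i for i in range(len(docs)) if i not in selected), key=score))
    return selected

# Hypothetical embedding vectors for the Federer example.
query = np.array([0.7, 0.7, 0.1])      # "How old is Federer and how many children does he have?"
docs = [np.array([0.9, 0.1, 0.1]),     # 0: "Roger Federer is 43 years old"
        np.array([0.9, 0.2, 0.1]),     # 1: "Federer's birthday is 8 August 1981"
        np.array([0.1, 0.9, 0.1])]     # 2: a fact about his children

print(nearest_neighbours(query, docs, 2))  # [1, 0]: two near-duplicate facts about his age
print(diverse_selection(query, docs, 2))   # [1, 2]: one age fact plus the children fact
```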
More reliable answers with much smaller models
However, targeted information selection not only improves the quality of responses. It can also be used to reduce the ever-increasing computing power required by AI applications. By indirectly measuring uncertainty, the model can decide for itself how much more data is needed to provide a sufficiently reliable answer. Consequently, the computational overhead required by an LLM can be systematically adapted to the complexity of the question and the availability of relevant information.
Since SIFT continuously adapts the weighting of the arrow directions to its calculations during data retrieval, the enriched model becomes increasingly reliable the more it is used. This is known as test-time training and can be used to achieve the same output performance with smaller models. “In tests with standard data sets, we used SIFT tuning to outperform even the best current AI models with models up to 40 times smaller,” emphasises Hübotter.
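A rough sketch of what test-time training can look like in practice follows. It assumes a Hugging Face-style causal language model and tokenizer, and uses a deliberately naive word-overlap routine as a stand-in for SIFT's uncertainty-based data selection; it illustrates the general idea, not the ETH implementation.

```python
# Test-time training sketch: briefly fine-tune on question-relevant passages, then answer.
# Assumes a Hugging Face-style causal LM (forward pass returns .loss when given labels).
import torch

def select_relevant_passages(question, corpus, k=3):
    # Naive stand-in for SIFT: keep the k passages sharing the most words with the question.
    q_words = set(question.lower().split())
    return sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))[:k]

def answer_with_test_time_training(model, tokenizer, question, corpus, steps=3):
    passages = select_relevant_passages(question, corpus)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

    model.train()
    for _ in range(steps):                      # a few quick adaptation passes
        for text in passages:
            batch = tokenizer(text, return_tensors="pt")
            loss = model(**batch, labels=batch["input_ids"]).loss
            optimizer.zero_grad(); loss.backward(); optimizer.step()

    model.eval()
    prompt = tokenizer(question, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**prompt, max_new_tokens=64)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```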
Identifying added value of relevant data
Additional applications for the SIFT algorithm are opening up in terms of data evaluation, as Krause explains: “We can track which enrichment data SIFT selects. They are closely related to the question and therefore particularly relevant to this subject area. This could be used in medicine, for example, to investigate which laboratory analyses or measurement values are significant for a specific diagnosis and which less so.”
Hübotter is currently presenting his approach at the International Conference on Learning Representations (ICLR) in Singapore. In December, the ETH researchers won the prize for Best Scientific Article for their method in the “Finetuning in Modern Machine Learning” workshop at the Annual Conference on Neural Information Processing Systems (NeurIPS).
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs