Monday, June 10, 2024

 

Advanced AI-based techniques scale up solving complex combinatorial optimization problems

UNIVERSITY OF CALIFORNIA - SAN DIEGO
A framework based on advanced AI techniques can solve complex, computationally intensive problems faster and in a more scalable way than state-of-the-art methods, according to a study led by engineers at the University of California San Diego. 

In the paper, which was published May 30 in Nature Machine Intelligence, researchers present HypOp, a framework that uses unsupervised learning and hypergraph neural networks. The framework solves combinatorial optimization problems significantly faster than existing methods, and it can handle certain combinatorial problems that prior methods cannot solve as effectively. 

“In this paper, we tackle the difficult task of addressing combinatorial optimization problems that are paramount in many fields of science and engineering,” said Nasimeh Heydaribeni, the paper’s corresponding author and a postdoctoral scholar in the UC San Diego Department of Electrical and Computer Engineering. She is part of the research group of Professor Farinaz Koushanfar, who co-directs the Center for Machine-Intelligence, Computing and Security at the UC San Diego Jacobs School of Engineering. Professor Tina Eliassi-Rad from Northeastern University also collaborated with the UC San Diego team on this project.

One example of a relatively simple combinatorial problem is figuring out how many, and what kinds of, goods to stock at specific warehouses so that delivering them consumes the least amount of fuel. 

HypOp can be applied to a broad spectrum of challenging real-world problems, with applications in drug discovery, chip design, logic verification, logistics and more. These are all combinatorial problems with a wide range of variables and constraints that make them extremely difficult to solve, because the underlying search space of potential solutions grows exponentially, rather than linearly, with the problem size. 
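To make that scaling concrete, here is a small, purely illustrative Python sketch (not from the paper): a toy warehouse-assignment search in which every item can go to one of two warehouses, so each additional item doubles the number of candidate assignments a brute-force solver would have to examine.

# Illustrative only: brute-force enumeration of a toy warehouse-assignment
# problem. The setup and sizes are hypothetical, chosen to show how the
# search space grows exponentially with the number of items.
from itertools import product

def count_assignments(num_items, num_warehouses=2):
    """Count every way to assign each item to exactly one warehouse."""
    return sum(1 for _ in product(range(num_warehouses), repeat=num_items))

for n in (5, 10, 20):
    print(f"{n} items -> {count_assignments(n):,} candidate assignments")
# 5 items -> 32; 10 items -> 1,024; 20 items -> 1,048,576.
# Each extra item doubles the work, which is why exhaustive search breaks down.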

HypOp solves these complex problems in a more scalable manner by using a new distributed algorithm that lets multiple computation units work on different parts of the hypergraph together, in parallel, more efficiently. 
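The press release does not spell out the algorithm, so the following is only a rough sketch of the general idea under stated assumptions: the constraint hypergraph is split into partitions, each worker process optimizes its own partition with a placeholder local rule, and the partial assignments are merged afterwards. The partitioning, the local solver and the handling of variables shared across partitions are all simplified stand-ins, not HypOp's actual procedure.

# A rough sketch of distributed solving over a partitioned hypergraph.
# solve_partition is a hypothetical placeholder, not HypOp's local optimizer.
from multiprocessing import Pool

def solve_partition(partition):
    """Greedy placeholder: start from all-ones and break fully-set hyperedges."""
    nodes, hyperedges = partition
    assignment = {v: 1 for v in nodes}
    for edge in hyperedges:                       # each hyperedge is a tuple of nodes
        if all(assignment.get(v, 0) == 1 for v in edge):
            assignment[edge[0]] = 0               # flip one node to satisfy the constraint
    return assignment

def solve_in_parallel(partitions, workers=2):
    """Optimize each partition in its own process, then merge the results."""
    with Pool(workers) as pool:
        local_solutions = pool.map(solve_partition, partitions)
    merged = {}
    for sol in local_solutions:
        merged.update(sol)                        # a real system would reconcile
    return merged                                 # variables shared across partitions

if __name__ == "__main__":
    # Two made-up partitions of a small hypergraph (nodes 0-5).
    partitions = [
        ([0, 1, 2], [(0, 1, 2)]),
        ([3, 4, 5], [(3, 4), (4, 5)]),
    ]
    print(solve_in_parallel(partitions))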

HypOp introduces a new problem embedding that leverages hypergraph neural networks, whose higher-order connections model problem constraints better than traditional graph neural networks and allow the framework to solve them more proficiently. HypOp can also transfer what it learns on one problem to help solve other, seemingly different problems more effectively. An additional fine-tuning step leads to more accurate solutions than prior methods. 
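At a high level, the recipe described above is to train a hypergraph neural network with an unsupervised loss that penalizes constraint violations on a relaxed (continuous) assignment, then map the result back to discrete decisions. The PyTorch sketch below follows that general pattern but is not HypOp's code: the convolution layer, the toy hyperedges, the "not all nodes alike per hyperedge" objective and the simple rounding step are illustrative assumptions, and the paper's fine-tuning and transfer-learning components are omitted.

# A hedged sketch of unsupervised training of a hypergraph neural network for
# a toy constrained problem; layer sizes, hyperedges and the objective are
# made up for illustration and do not come from the HypOp paper.
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """Pass node features through hyperedges using the incidence matrix H."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, H):
        edge_feats = H.t() @ x          # aggregate nodes into each hyperedge
        node_feats = H @ edge_feats     # scatter hyperedge features back to nodes
        return torch.relu(self.lin(node_feats))

# Toy instance: 4 nodes, hyperedges {0,1,2} and {1,2,3} (illustrative only).
hyperedges = [[0, 1, 2], [1, 2, 3]]
H = torch.tensor([[1., 0.], [1., 1.], [1., 1.], [0., 1.]])  # node-by-edge incidence
x = torch.eye(4)                        # one-hot node features

conv, head = HypergraphConv(4, 8), nn.Linear(8, 1)
opt = torch.optim.Adam(list(conv.parameters()) + list(head.parameters()), lr=0.05)

def violation_loss(p):
    """Unsupervised penalty: high when all nodes of a hyperedge share a label."""
    loss = torch.zeros(())
    for e in hyperedges:
        pe = p[e]
        loss = loss + torch.prod(pe) + torch.prod(1.0 - pe)
    return loss

for step in range(200):
    opt.zero_grad()
    p = torch.sigmoid(head(conv(x, H))).squeeze(-1)  # relaxed 0-1 node assignment
    loss = violation_loss(p)
    loss.backward()
    opt.step()

with torch.no_grad():
    p = torch.sigmoid(head(conv(x, H))).squeeze(-1)
    print("rounded assignment:", (p > 0.5).int().tolist(),
          "penalty:", float(violation_loss(p)))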

This research was funded in part by the Department of Defense and Army Research Office-funded MURI AutoCombat project and by the NSF-funded TILOS AI Institute. 

Distributed Constrained Combinatorial Optimization Leveraging Hypergraph Neural Networks

Nasimeh Heydaribeni, Xinrui Zhan, Ruisi Zhang and Farinaz Koushanfar, UC San Diego Department of Electrical and Computer Engineering

Tina Eliassi-Rad, Khoury College of Computer Sciences, Northeastern University

 

The code for HypOp is available here.


 
