Can artificial intelligence help computers guarantee the safety of autonomous systems and push the frontiers of optimization? Two Princeton professors believe it can, and they received a research grant earlier this year to explore how AI can enhance the mathematical proof systems used to ensure safety and efficiency in complex technologies.
Professor of Operations Research and Financial Engineering Amir Ali Ahmadi and Professor of Computer Science Pravesh Kothari received a Princeton AI Lab Seed Grant in February for their research project “AI-Assisted Algebraic Proof Systems with Engineering Applications.”
The goal of this collaborative endeavor is to leverage application-specific data along with AI tools to scale up semidefinite programming-based algebraic proof systems. Semidefinite programming is a powerful method for solving complex optimization problems, especially those involving uncertainty or safety, which are often too complicated for conventional algorithms. These proof systems help rigorously confirm whether certain mathematical conditions hold true, which is critical in engineering applications where failure is not an option.
Their work will focus on key applications, including the safety verification of robotic systems, automated theorem proving in geometry, and algorithmic robust statistics. Safety verification involves proving that a system, like a drone or a self-driving car, will always behave safely under all conditions. Automated theorem proving uses computers to logically prove mathematical statements, and robust statistics deals with analyzing data in a way that resists being misled by outliers or errors.
“This collaboration had been on our minds for a while,” Ahmadi told The Daily Princetonian. “We are working on sum of squares optimization, [Kothari] from the theoretical computer science side and me from the optimization and control side.”
Sum of squares (sos) optimization is a cutting-edge mathematical technique that allows researchers to verify that systems governed by polynomial equations behave as intended. Essentially, it helps prove whether certain outcomes are impossible or guaranteed, something that’s especially important in areas like robotics and control theory, where safety must be mathematically assured. For example, to ensure that an autonomous vehicle avoids obstacles, sos can be used to prove that the vehicle’s path will never intersect with any object in its environment.
Despite its theoretical power, sos often runs into computational limits, as it relies on semidefinite programming, which tends to become intractable at larger scales. In this case, intractable means the calculations become so large and complex that even powerful computers struggle or fail to finish them in a reasonable amount of time. This is where machine learning enters the equation: Ahmadi and Kothari are using AI tools trained on past data to help guide and speed up the process of finding these mathematical proofs.
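For a rough sense of how this works in miniature, consider the polynomial p(x, y) = x^2 + 2xy + 2y^2. Writing p as z^T Q z for the monomial vector z = (x, y), where Q is a positive semidefinite matrix, certifies that p can never be negative, because any such Q yields a decomposition of p into a sum of squares. The sketch below is our own illustration, not the researchers' code; it assumes the cvxpy and numpy libraries and poses the certificate search as a tiny semidefinite program. The systems Ahmadi and Kothari study involve vastly larger versions of the same computation.

```python
import cvxpy as cp
import numpy as np

# Toy sum-of-squares certificate via semidefinite programming (illustrative sketch).
# Question: can p(x, y) = x^2 + 2xy + 2y^2 be written as z^T Q z with z = (x, y)
# and Q positive semidefinite?  If so, p is a sum of squares and hence never negative.

Q = cp.Variable((2, 2), symmetric=True)

constraints = [
    Q >> 0,                  # Q must be positive semidefinite
    Q[0, 0] == 1,            # matches the coefficient of x^2
    Q[1, 1] == 2,            # matches the coefficient of y^2
    Q[0, 1] + Q[1, 0] == 2,  # matches the coefficient of xy
]

problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve()

print("status:", problem.status)        # 'optimal' means a certificate exists
print("Q =\n", np.round(Q.value, 3))
# Decomposing Q recovers an explicit certificate, here p(x, y) = (x + y)^2 + y^2,
# which proves that p is nonnegative for every choice of x and y.
```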
“If I already have a dataset of infeasible polynomial systems and their algebraic proofs of infeasibility, can I use that to shortcut the next proof?” Ahmadi asked. In other words, if the system has previously encountered problems that were proven to have no solution, can AI learn from those examples to more quickly identify or rule out similar problems in the future? “That’s our vision to guide future computation.”
Although the grant funding is relatively modest, Ahmadi noted that it still spurred the project along. Additional support from the School of Engineering and Applied Science Innovation Fund will allow the team to involve more students and lay the groundwork for the project.
The first project the team is tackling involves interpreting sos algorithms for finding the coloring number of a graph. In simple terms, they want to find the minimum number of colors needed to color the nodes (or points) of a network so that connected nodes don’t share the same color.
“This problem has many applications,” Ahmadi explained. “For example, if the nodes of the graph represent courses at Princeton and the edges represent conflicts in these courses (they meet at the same time), then the coloring number is exactly the minimum number of classrooms needed to hold all courses without conflict.”
Ahmadi continued with a second, more lighthearted, application. “If the same nodes represent guests at your wedding and the edges represent interpersonal conflicts (guests who can’t stand each other), then the coloring number corresponds to the minimum number of dinner tables needed to ensure a drama-free wedding!”
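To make the coloring number concrete, the sketch below is our own illustration, not the team's code, and uses hypothetical course names: it brute-forces the coloring number of a tiny conflict graph in Python. The sos-based algorithms the researchers study are aimed at graphs far too large for this kind of exhaustive search.

```python
from itertools import product

# Illustrative sketch: brute-force the coloring number of a tiny "conflict graph".
# Nodes are courses; an edge means two courses meet at the same time and
# therefore need different classrooms.  Course names below are hypothetical.
courses = ["MAT201", "COS226", "ORF307", "PHY104"]
conflicts = [("MAT201", "COS226"), ("COS226", "ORF307"), ("MAT201", "PHY104")]

def coloring_number(nodes, edges):
    """Smallest number of colors (classrooms) such that no edge joins two
    nodes of the same color, found by exhaustive search."""
    for k in range(1, len(nodes) + 1):
        for colors in product(range(k), repeat=len(nodes)):
            assignment = dict(zip(nodes, colors))
            if all(assignment[a] != assignment[b] for a, b in edges):
                return k, assignment
    return len(nodes), None  # unreachable for a non-empty node list

k, assignment = coloring_number(courses, conflicts)
print(f"coloring number = {k}")              # 2: two classrooms suffice here
print("one conflict-free assignment:", assignment)
```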
“Just as many AI tools like those based on neural networks act as ‘black boxes’” — systems that work well but don’t easily reveal how they arrived at a particular answer — “sos algorithms are powerful but can also be mysterious,” Ahmadi told the ‘Prince.’
“There is a growing interest in the machine learning community to replace them with more interpretable models. Our goal is to understand how these algorithms solve difficult combinatorial problems for which no alternative, more interpretable algorithms currently exist,” he said.
In the long run, Ahmadi and Kothari hope their work will contribute to one of the most ambitious frontiers in artificial intelligence: automated mathematical reasoning.
“One of the big-picture goals of this project is automated theorem proving,” Ahmadi said. “Can a machine actually prove deep, non-trivial theorems on its own — and do it at scale? That’s a grand challenge, and sum of squares optimization gives us one structured path toward it.”
“It is a deeply interdisciplinary effort,” he added. “And it is just getting started.”
Adam Moussa is a contributing Research writer for the ‘Prince.’
Please send any corrections to corrections[at]dailyprincetonian.com.