DAS Research presents GENO for GPUs at AAAI

GENO makes mathematical optimization easily accessible: the desired optimization problem is specified in a natural modeling language, and the corresponding solver is generated at the push of a button. In machine learning it was previously common to hand-write a new solver for each problem; GENO has thus reduced the development effort from days or even weeks to minutes.
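
For illustration, here is what such a specification might look like for the well-known lasso regression problem, written down as a Python string. This is only a sketch in the style of published GENO examples; the exact syntax of GENO's modeling language may differ.

```python
# Hypothetical GENO-style specification of the lasso problem. Illustrative
# sketch only; the exact GENO modeling-language syntax may differ.
lasso_model = """
parameters
  Matrix A
  Vector b
  Scalar g
variables
  Vector x
min
  0.5 * norm2(A * x - b)^2 + g * norm1(x)
"""
# GENO takes such a specification and generates a ready-to-use solver.
```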

Internally, GENO transforms the specification of the given optimization problem into solver software in several steps. The generated solver implements the L-BFGS-B algorithm, a quasi-Newton method for bound-constrained problems. Unfortunately, this method cannot exploit the massive parallelism of modern GPUs (graphics processing units), because it contains an inherently sequential step, the so-called Cauchy point computation.
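
The bottleneck arises because the Cauchy point search walks along the piecewise-linear path obtained by projecting the steepest-descent ray onto the box constraints, visiting one breakpoint after the other, and each step depends on the state left by the previous one. A simplified Python sketch of this scan (steepest-descent direction only, omitting the quasi-Newton model that the real algorithm maintains along the path) illustrates the sequential structure:

```python
import numpy as np

def cauchy_point_sketch(x, g, lower, upper):
    # Search direction: steepest descent. The real algorithm also builds a
    # quadratic quasi-Newton model along the path; we omit that here.
    d = -g.copy()

    # Breakpoints: the step size at which each coordinate hits its bound.
    t = np.full(x.shape, np.inf)
    neg, pos = d < 0, d > 0
    t[neg] = (lower[neg] - x[neg]) / d[neg]
    t[pos] = (upper[pos] - x[pos]) / d[pos]

    x_cp = x.copy()
    t_prev = 0.0
    # Inherently sequential scan: breakpoints must be visited in increasing
    # order, and every iteration depends on the previous one.
    for i in np.argsort(t):
        if not np.isfinite(t[i]):
            break
        x_cp = np.clip(x_cp + (t[i] - t_prev) * d, lower, upper)
        d[i] = 0.0          # this coordinate is now fixed at its bound
        t_prev = t[i]
        # (The real algorithm stops as soon as its model of the objective
        # starts to increase along the path; that test is omitted here.)
    return x_cp
```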

We have developed a variant of the L-BFGS-B algorithm that avoids the Cauchy point computation and is therefore efficiently parallelizable. We proved the convergence of the new algorithm theoretically. In practice, the new variant is significantly faster on various machine learning benchmark problems than the previous multicore CPU version. We will present the paper describing the new algorithm at this year's conference of the Association for the Advancement of Artificial Intelligence (AAAI).
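
The details of the new variant and its convergence proof are in the paper. To give a flavor of why removing the Cauchy point computation opens the door to GPUs, here is a generic sketch, explicitly not the method from the paper, of a projection-based quasi-Newton step: apart from a short loop over the L-BFGS history, it consists entirely of dense linear algebra and element-wise operations, which map well onto GPUs.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    # Textbook L-BFGS two-loop recursion (not the paper's algorithm). The
    # loop runs only over the short history (typically ~10 vector pairs);
    # each iteration is a dense dot product, which parallelizes well.
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)
    if s_hist:
        s, y = s_hist[-1], y_hist[-1]
        q *= (y @ s) / (y @ y)                    # initial Hessian scaling
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q                                     # descent direction

def projected_step(x, g, s_hist, y_hist, lower, upper, alpha=1.0):
    # Generic projection-based step for box constraints: a dense direction
    # computation followed by a single element-wise clip. No sequential
    # breakpoint scan is needed.
    p = lbfgs_direction(g, s_hist, y_hist)
    return np.clip(x + alpha * p, lower, upper)
```

Since this sketch uses only standard array operations, it would run unchanged on a GPU with an API-compatible array library such as CuPy in place of NumPy.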

Publication: S. Laue, M. Blacher, and J. Giesen. Optimization for Classical Machine Learning Problems on the GPU. Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), 2022.