
Combinatorial optimization has wide applications across many industries. However, in many real-life applications, some or all coefficients of the optimization problem are not known at the time of execution. In such applications, those coefficients are estimated using machine learning (ML) models. End-to-end predict-and-optimize approaches, which train the ML model with the downstream optimization task taken into consideration, have received increasing attention. In the case of a mixed integer linear program (MILP), previous work suggested relaxing the MILP, adding a quadratic regularizer term, and differentiating the KKT conditions to facilitate gradient-based learning. In this work, we propose instead to differentiate the homogeneous self-dual formulation of the relaxed LP, which contains more parameters than the KKT conditions. Moreover, as our formulation contains a log-barrier term, we need not add a quadratic term to make the formulation differentiable.
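The abstract does not give implementation details, so the sketch below is an illustrative construction, not the paper's method: it shows on a toy simplex-constrained LP why a log-barrier term makes the solution map differentiable in the cost vector, so that no quadratic regularizer is needed for gradient-based learning. The functions `barrier_solve` and `barrier_jacobian` are hypothetical names introduced here for illustration.

```python
import numpy as np

def barrier_solve(c, mu=0.1):
    # Barrier subproblem of the toy LP  min c.x  s.t. sum(x) = 1, x >= 0:
    #     min  c.x - mu * sum(log x_i)   s.t.  sum(x) = 1.
    # Stationarity: c_i - mu/x_i + lam = 0  =>  x_i = mu / (c_i + lam),
    # with the multiplier lam fixed by sum(x) = 1 (bisection below).
    c = np.asarray(c, dtype=float)
    g = lambda lam: mu * np.sum(1.0 / (c + lam))  # strictly decreasing in lam
    lo = -c.min() + 1e-12          # c_i + lam must stay positive
    hi = lo + 1.0
    while g(hi) > 1.0:             # expand until the root is bracketed
        hi = lo + 2.0 * (hi - lo)
    for _ in range(200):           # bisect; interval shrinks to rounding error
        mid = 0.5 * (lo + hi)
        if g(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return mu / (c + lam)

def barrier_jacobian(x, mu):
    # Implicit differentiation of  c_i - mu/x_i + lam = 0  and  sum(x) = 1:
    #   dx_i/dc_j = -w_i * (delta_ij + dlam/dc_j),   with  w_i = x_i**2 / mu,
    #   dlam/dc_j = -w_j / sum(w)    (from  sum_i dx_i/dc_j = 0).
    # The barrier's Hessian mu/x_i**2 is what makes this system invertible;
    # without it (mu -> 0) the LP solution is piecewise constant in c.
    w = x**2 / mu
    dlam = -w / w.sum()
    return -w[:, None] * (np.eye(len(x)) + dlam[None, :])
```

Because the barrier smooths the vertex solution into a strictly interior point, the Jacobian `dx/dc` is well defined everywhere and can be chained with a predictive model's gradients during end-to-end training.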
Original language: English
Number of pages: 8
Publication status: Published - 7 Sep 2020
Event: Doctoral Program of the 26th International Conference on Principles and Practice of Constraint Programming - Online
Duration: 7 Sep 2020 – 11 Sep 2020
https://cp2020.a4cp.org/callfordp.html

Conference

Conference: Doctoral Program of the 26th International Conference on Principles and Practice of Constraint Programming
Period: 7/09/20 – 11/09/20

Research areas

  • data-driven optimization
  • interior point method
  • neural networks

ID: 53707773