Sunday 15 January 2012

c++ - How to solve large-scale nonlinear optimization problems with Ceres?

I need to optimize a surface represented by a 2D grid of points so that the normal vectors of the surface align with provided target normal vectors. The grid size will be between 201x201 and 1001x1001, which means the number of variables is between roughly 40,000 and 1,000,000, since I am only modifying the z-coordinates of the mesh points.

I am using the Ceres framework, which is supposed to excel at large-scale nonlinear optimization problems. I tried MATLAB's fmincon, but it uses an incredible amount of memory. I wrote an objective function that works for small meshes (it was successful at 3x3 and 31x31). However, when I try to compile the code with a big mesh size (157x200), I see the error below. I have read that this is a limitation of Eigen. However, when I tell Ceres to use LAPACK instead of Eigen, I get the same error for big matrices. I tried these lines:

options.dense_linear_algebra_library_type = ceres::LAPACK;
options.linear_solver_type = ceres::DENSE_QR;

These should tell the solver to use LAPACK and DENSE_QR, and the output when using a 3x3 mesh shows:

Minimizer                        TRUST_REGION
Dense linear algebra library     LAPACK
Trust region strategy            LEVENBERG_MARQUARDT

                                     Given                     Used
Linear solver                     DENSE_QR                 DENSE_QR
Threads                                  1                        1
Linear solver threads                    1                        1

However, when I use the big parameters, I still get errors from Eigen.

Anyways, I could really use some help with this. How can I get Ceres to optimize a big number of variables (> 30,000)? Thanks in advance.

Link to Ceres: http://ceres-solver.org

Link to Eigen: http://eigen.tuxfamily.org/dox/

Error:

In file included from /usr/include/eigen3/Eigen/Core:254:0,
                 from /usr/local/include/ceres/jet.h:165,
                 from /usr/local/include/ceres/internal/autodiff.h:145,
                 from /usr/local/include/ceres/autodiff_cost_function.h:132,
                 from /usr/local/include/ceres/ceres.h:37,
                 from /home/ubuntu/code/surfaceopt/surfaceopt.cc:10:
/usr/include/eigen3/Eigen/src/Core/DenseStorage.h: In instantiation of ‘Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, Alignment>::plain_array() [with T = double; int Size = 31400; int MatrixOrArrayOptions = 2; int Alignment = 0]’:
/usr/include/eigen3/Eigen/src/Core/DenseStorage.h:117:27:   required from ‘Eigen::DenseStorage<T, Size, _Rows, _Cols, _Options>::DenseStorage() [with T = double; int Size = 31400; int _Rows = 31400; int _Cols = 1; int _Options = 2]’
/usr/include/eigen3/Eigen/src/Core/PlainObjectBase.h:421:55:   required from ‘Eigen::PlainObjectBase<Derived>::PlainObjectBase() [with Derived = Eigen::Matrix<double, 31400, 1, 2, 31400, 1>]’
/usr/include/eigen3/Eigen/src/Core/Matrix.h:203:41:   required from ‘Eigen::Matrix<_Scalar, _Rows, _Cols, _Options, _MaxRows, _MaxCols>::Matrix() [with _Scalar = double; int _Rows = 31400; int _Cols = 1; int _Options = 2; int _MaxRows = 31400; int _MaxCols = 1]’
/usr/local/include/ceres/jet.h:179:13:   required from ‘ceres::Jet<T, N>::Jet() [with T = double; int N = 31400]’
/usr/local/include/ceres/internal/fixed_array.h:138:10:   required from ‘ceres::internal::FixedArray<T, inline_elements>::FixedArray(ceres::internal::FixedArray<T, inline_elements>::size_type) [with T = ceres::Jet<double, 31400>; long int inline_elements = 0l; ceres::internal::FixedArray<T, inline_elements>::size_type = long unsigned int]’
/usr/local/include/ceres/internal/autodiff.h:233:70:   required from ‘static bool ceres::internal::AutoDiff<Functor, T, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9>::Differentiate(const Functor&, const T* const*, int, T*, T**) [with Functor = ComputeEint; T = double; int N0 = 31400; int N1 = 0; int N2 = 0; int N3 = 0; int N4 = 0; int N5 = 0; int N6 = 0; int N7 = 0; int N8 = 0; int N9 = 0]’
/usr/local/include/ceres/autodiff_cost_function.h:218:25:   required from ‘bool ceres::AutoDiffCostFunction<CostFunctor, kNumResiduals, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9>::Evaluate(const double* const*, double*, double**) const [with CostFunctor = ComputeEint; int kNumResiduals = 1; int N0 = 31400; int N1 = 0; int N2 = 0; int N3 = 0; int N4 = 0; int N5 = 0; int N6 = 0; int N7 = 0; int N8 = 0; int N9 = 0]’
/home/ubuntu/code/surfaceopt/surfaceopt.cc:367:1:   required from here
/usr/include/eigen3/Eigen/src/Core/DenseStorage.h:41:5: error: ‘OBJECT_ALLOCATED_ON_STACK_IS_TOO_BIG’ is not a member of ‘Eigen::internal::static_assertion<false>’
     EIGEN_STATIC_ASSERT(Size * sizeof(T) <= 128 * 128 * 8, OBJECT_ALLOCATED_ON_STACK_IS_TOO_BIG);
     ^
/usr/include/eigen3/Eigen/src/Core/DenseStorage.h: In instantiation of ‘Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 16>::plain_array() [with T = double; int Size = 31400; int MatrixOrArrayOptions = 1]’:
/usr/include/eigen3/Eigen/src/Core/DenseStorage.h:120:59:   required from ‘Eigen::DenseStorage<T, Size, _Rows, _Cols, _Options>::DenseStorage(Eigen::DenseIndex, Eigen::DenseIndex, Eigen::DenseIndex) [with T = double; int Size = 31400; int _Rows = 1; int _Cols = 31400; int _Options = 1; Eigen::DenseIndex = long int]’
/usr/include/eigen3/Eigen/src/Core/PlainObjectBase.h:438:41:   required from ‘Eigen::PlainObjectBase<Derived>::PlainObjectBase(Eigen::PlainObjectBase<Derived>::Index, Eigen::PlainObjectBase<Derived>::Index, Eigen::PlainObjectBase<Derived>::Index) [with Derived = Eigen::Matrix<double, 1, 31400, 1, 1, 31400>; Eigen::PlainObjectBase<Derived>::Index = long int]’
/usr/include/eigen3/Eigen/src/Core/Matrix.h:273:76:   required from ‘Eigen::Matrix<_Scalar, _Rows, _Cols, _Options, _MaxRows, _MaxCols>::Matrix(const Eigen::MatrixBase<OtherDerived>&) [with OtherDerived = Eigen::Transpose<const Eigen::Matrix<double, 31400, 1, 2, 31400, 1> >; _Scalar = double; int _Rows = 1; int _Cols = 31400; int _Options = 1; int _MaxRows = 1; int _MaxCols = 31400]’
/usr/include/eigen3/Eigen/src/Core/DenseBase.h:367:62:   required from ‘Eigen::DenseBase<Derived>::EvalReturnType Eigen::DenseBase<Derived>::eval() const [with Derived = Eigen::Transpose<const Eigen::Matrix<double, 31400, 1, 2, 31400, 1> >; Eigen::DenseBase<Derived>::EvalReturnType = const Eigen::Matrix<double, 1, 31400, 1, 1, 31400>]’
/usr/include/eigen3/Eigen/src/Core/IO.h:244:69:   required from ‘std::ostream& Eigen::operator<<(std::ostream&, const Eigen::DenseBase<Derived>&) [with Derived = Eigen::Transpose<const Eigen::Matrix<double, 31400, 1, 2, 31400, 1> >; std::ostream = std::basic_ostream<char>]’
/usr/local/include/ceres/jet.h:632:35:   required from ‘std::ostream& ceres::operator<<(std::ostream&, const ceres::Jet<T, N>&) [with T = double; int N = 31400; std::ostream = std::basic_ostream<char>]’
/home/ubuntu/code/surfaceopt/surfaceopt.cc:103:50:   required from ‘bool ComputeEint::operator()(const T*, T*) const [with T = ceres::Jet<double, 31400>]’
/usr/local/include/ceres/internal/variadic_evaluate.h:175:26:   required from ‘static bool ceres::internal::VariadicEvaluate<Functor, T, N0, 0, 0, 0, 0, 0, 0, 0, 0, 0>::Call(const Functor&, const T* const*, T*) [with Functor = ComputeEint; T = ceres::Jet<double, 31400>; int N0 = 31400]’
/usr/local/include/ceres/internal/autodiff.h:283:45:   required from ‘static bool ceres::internal::AutoDiff<Functor, T, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9>::Differentiate(const Functor&, const T* const*, int, T*, T**) [with Functor = ComputeEint; T = double; int N0 = 31400; int N1 = 0; int N2 = 0; int N3 = 0; int N4 = 0; int N5 = 0; int N6 = 0; int N7 = 0; int N8 = 0; int N9 = 0]’
/usr/local/include/ceres/autodiff_cost_function.h:218:25:   required from ‘bool ceres::AutoDiffCostFunction<CostFunctor, kNumResiduals, N0, N1, N2, N3, N4, N5, N6, N7, N8, N9>::Evaluate(const double* const*, double*, double**) const [with CostFunctor = ComputeEint; int kNumResiduals = 1; int N0 = 31400; int N1 = 0; int N2 = 0; int N3 = 0; int N4 = 0; int N5 = 0; int N6 = 0; int N7 = 0; int N8 = 0; int N9 = 0]’
/home/ubuntu/code/surfaceopt/surfaceopt.cc:367:1:   required from here
/usr/include/eigen3/Eigen/src/Core/DenseStorage.h:79:5: error: ‘OBJECT_ALLOCATED_ON_STACK_IS_TOO_BIG’ is not a member of ‘Eigen::internal::static_assertion<false>’
     EIGEN_STATIC_ASSERT(Size * sizeof(T) <= 128 * 128 * 8, OBJECT_ALLOCATED_ON_STACK_IS_TOO_BIG);
     ^
make[2]: *** [CMakeFiles/surfaceopt.dir/surfaceopt.cc.o] Error 1
make[1]: *** [CMakeFiles/surfaceopt.dir/all] Error 2
make: *** [all] Error 2

My code looks like this (abbreviated to take out irrelevant material):

#define TEXT true
#define VERBOSE false
#define NV 31400
#define NF 62088
#define NX 157
#define NY 200
#define MAXNB 6

#include <math.h>
#include <ceres/ceres.h>
#include <ceres/rotation.h>
#include "glog/logging.h"
#include <iostream>
#include <fstream>
#include <iterator>
#include <algorithm>
#include <string>

using ceres::AutoDiffCostFunction;
using ceres::CostFunction;
using ceres::Problem;
using ceres::Solver;
using ceres::Solve;
using ceres::CrossProduct;

...

class ComputeEint {
 private:
  double xy_[NV][2];      // x and y coords
  int c_[NF][3];          // connectivity list
  int af_[NV][MAXNB];     // list of adjacent faces for each vertex
  double ntgt_[NV][3];    // target normal vectors
  int num_af_[NV];        // number of adjacent faces for each vertex

 public:
  // Constructor
  ComputeEint(double xy[][2], int c[][3], int af[][MAXNB],
              double ntgt[][3], int num_af[NV]) {
    std::copy(&xy[0][0], &xy[0][0] + NV * 2, &xy_[0][0]);
    ...

  template <typename T>
  bool operator()(const T* const z, T* e) const {
    e[0] = T(0);
    ...
    // Computes vertex normals of the triangulated surface by averaging the
    // adjacent face normals
    ...
    e[0] = e[0] / T(NV);
    return true;
  }
};

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);

  double tp[NV][3];       // points in the mesh
  int tc[NF][3];          // mesh connectivity list
  double ntgt[NV][3];     // target normals
  int af[NV][MAXNB];      // list of adjacent faces of each vertex
  int num_af[NV];         // number of adjacent faces for each vertex

  int nx = NX;
  int ny = NY;

  // Read tp, tc, ntgt, af, num_af from file
  ...

  // Set up xy for the cost functor
  double xy[NV][2];
  double z[NV];
  // Copy the first two columns of tp into xy
  ...

  Problem problem;

  // Set up the only cost function (also known as residual). This uses
  // auto-differentiation to obtain the derivative (Jacobian).
  CostFunction* cost_function =
      new AutoDiffCostFunction<ComputeEint, 1, NV>(
          new ComputeEint(xy, tc, af, ntgt, num_af));

  std::cout << "Created cost function" << "\n";
  problem.AddResidualBlock(cost_function, NULL, &z[0]);
  std::cout << "Added residual block" << "\n";

  // Run the solver!
  Solver::Options options;
  options.minimizer_progress_to_stdout = true;
  options.max_num_iterations = 50;
  options.function_tolerance = 1e-4;
  options.dense_linear_algebra_library_type = ceres::LAPACK;

  Solver::Summary summary;
  Solve(options, &problem, &summary);
  std::cout << summary.FullReport() << "\n";

  // Write the output of the optimization to file
  ...

  return 0;
}

Two things:

You are using the DENSE_QR linear solver, which results in a dense Jacobian. That is a bad idea. Change the linear solver to SPARSE_NORMAL_CHOLESKY and you should be able to solve problems of this size quite easily.

You are going to need SuiteSparse/CXSparse if you are using version 1.9 or older. If you use the latest release candidate or the git version, you should be able to use Eigen's sparse linear algebra too. A minimal sketch of these options is shown below.
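For concreteness, here is a minimal sketch of that solver configuration. It assumes your Ceres build has SuiteSparse available; the EIGEN_SPARSE value only exists in the newer versions mentioned above.

ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
// Pick whichever sparse backend your Ceres build supports:
options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;  // or ceres::EIGEN_SPARSE on recent versions
options.minimizer_progress_to_stdout = true;

ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
std::cout << summary.FullReport() << "\n";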

You are creating a single cost function for the whole problem. This means you are not exposing any sparsity to the solver. It is also what is causing the stack allocation problem, since the automatic differentiation machinery keeps its data on the stack.

Have a look at the example code that ships with Ceres, for example denoising.cc, which denoises an entire image and has a similar grid structure.

More likely than not, you want to create a single residual block for each vertex in your problem; see the sketch below.
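For illustration, here is a minimal sketch of what that restructuring could look like. It is not the question's actual energy: the VertexNormalResidual functor, the grid spacings dx and dy, and the central-difference normal are hypothetical stand-ins for the face-averaged normals in the original code. The point is the pattern of one small AutoDiffCostFunction per vertex, where each residual only depends on the z values of a few neighbouring grid points, so the Jets stay small and Ceres sees a sparse Jacobian.

struct VertexNormalResidual {
  // target: pointer to the 3 components of the target normal for this vertex.
  VertexNormalResidual(double dx, double dy, const double* target)
      : dx_(dx), dy_(dy) {
    std::copy(target, target + 3, target_);
  }

  // Depends only on the z values of the four grid neighbours.
  template <typename T>
  bool operator()(const T* const z_left, const T* const z_right,
                  const T* const z_down, const T* const z_up,
                  T* residual) const {
    // Unnormalised surface normal from central differences: (-dz/dx, -dz/dy, 1).
    T nx = -(z_right[0] - z_left[0]) / T(2.0 * dx_);
    T ny = -(z_up[0] - z_down[0]) / T(2.0 * dy_);
    T nz = T(1.0);
    T norm = sqrt(nx * nx + ny * ny + nz * nz);
    residual[0] = nx / norm - T(target_[0]);
    residual[1] = ny / norm - T(target_[1]);
    residual[2] = nz / norm - T(target_[2]);
    return true;
  }

  double dx_, dy_;
  double target_[3];
};

// One residual block per interior vertex; each z[i] is its own parameter
// block of size 1, so every block only touches its neighbours.
// dx and dy are the (hypothetical) grid spacings; ntgt and z are as in the
// question's code.
for (int j = 1; j < NY - 1; ++j) {
  for (int i = 1; i < NX - 1; ++i) {
    int v = j * NX + i;
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<VertexNormalResidual, 3, 1, 1, 1, 1>(
            new VertexNormalResidual(dx, dy, ntgt[v])),
        NULL,
        &z[v - 1], &z[v + 1], &z[v - NX], &z[v + NX]);
  }
}

With this structure the largest Jet carries only a handful of derivatives instead of 31,400, which avoids the Eigen stack-size assertion entirely, and SPARSE_NORMAL_CHOLESKY can exploit the grid sparsity.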

Tags: c++, optimization, eigen, lapack, large-scale
