Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models (Nonconvex Optimization and Its Applications)


After an elementary calculation, subproblem (15) can be simplified; the solution u^k of (10) then admits a closed-form expression, and the solution v^k can likewise be computed in closed form.

Variable x in (11) is updated by solving one subproblem, and variable y in (11) is updated by solving another; a simple computation shows that the solution y^k also admits a closed-form expression. In practice, proximal ADM exhibits excellent convergence behavior.
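To make the structure of these updates concrete, the sketch below shows a generic proximal-ADM iteration for a toy problem of the form min_{x,y} 0.5||Ax - b||^2 + lam ||y||_1 subject to x - y = 0, in which both subproblems admit closed-form solutions (a linear solve and soft-thresholding). The toy problem, the proximal weight mu, and all other names are illustrative assumptions, not the actual subproblems (10), (11), or (15) of the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    # Closed-form proximal operator of tau * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def proximal_admm_sketch(A, b, lam=0.1, rho=1.0, mu=1.0, iters=200):
    # Illustrative proximal-ADM loop for
    #   min 0.5*||A x - b||^2 + lam*||y||_1   s.t.   x - y = 0,
    # with an extra proximal term (mu/2)*||x - x_prev||^2 in the x-subproblem.
    # All parameter values are placeholders.
    n = A.shape[1]
    x, y, z = np.zeros(n), np.zeros(n), np.zeros(n)   # z is the scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        # x-update: quadratic subproblem, closed form via a linear solve
        x = np.linalg.solve(AtA + (rho + mu) * np.eye(n),
                            Atb + rho * (y - z) + mu * x)
        # y-update: l1 subproblem, closed-form soft-thresholding
        y = soft_threshold(x + z, lam / rho)
        # dual ascent on the constraint x - y = 0
        z = z + x - y
    return x, y
```

The proximal term (mu/2)||x - x^k||^2 is what distinguishes proximal ADM from plain ADM: it keeps the x-subproblem strongly convex and well conditioned even when the data matrix in that subproblem is rank deficient.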

The global convergence of ADM for convex problems was established by He and Yuan [31, 18] under the variational inequality framework. However, since our optimization problem in (8) is non-convex, the convergence analysis of ADM requires additional conditions. By imposing some mild conditions, Wen et al. established the convergence of ADM in the non-convex setting. Along a similar line, we establish the convergence property of proximal ADM. Specifically, we have the following convergence result (convergence of Algorithm 1): under the stated assumption, any accumulation point of the sequence generated by Algorithm 1 satisfies the KKT conditions of (9).

This assumption can be checked by measuring the violation of the equality constraints. Though not fully satisfactory, this provides some assurance of the convergence of Algorithm 1.
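As a concrete (assumed) illustration of this check, the helper below measures the relative violation of a generic linear equality constraint Ax + By = c at the current iterates; the actual constraints of (9) differ, and all names here are placeholders.

```python
import numpy as np

def constraint_violation(A, x, B, y, c):
    # Relative violation of the equality constraint A x + B y = c,
    # monitored per iteration as an empirical check of the convergence assumption.
    residual = A @ x + B @ y - c
    return np.linalg.norm(residual) / max(np.linalg.norm(c), 1.0)
```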

Two reasons explain the good performance of our method. Our method directly handles the complementarity constraints in (9): these constraints are the only source of non-convexity in the optimization problem, and they characterize the optimality of the KKT solution of (6). These special properties of MPEC distinguish it from general nonlinear optimization [63, 64, 62, 65].

Over the last decade, sparse plus low-rank matrix decomposition [52, 33] has become a powerful tool for effectively correcting large errors in structured data.

It aims to decompose a given corrupted image B (in matrix form) into a sparse component S and a low-rank component L. Here the sparse component represents the foreground of the image, which can be treated as outliers or impulse noise, while the low-rank component corresponds to the background, which is highly correlated. This decomposition can be written as an equivalent optimization problem. While these works use a low-rank prior in their objective function, we use a Total Variation (TV) prior in ours.
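For context, the snippet below sketches the classical convex formulation of sparse plus low-rank decomposition (robust PCA), min ||L||_* + lam ||S||_1 subject to B = L + S, solved with a standard ADMM loop built from singular-value thresholding and soft-thresholding. This is only a generic illustration of the decomposition idea cited in [52, 33], not the model proposed in this paper.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft-thresholding: proximal operator of tau * l1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_admm(B, lam=None, rho=1.0, iters=300):
    # Decompose B into low-rank L plus sparse S via
    #   min ||L||_* + lam*||S||_1   s.t.   B = L + S   (ADMM sketch).
    m, n = B.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L, S, Z = np.zeros_like(B), np.zeros_like(B), np.zeros_like(B)
    for _ in range(iters):
        L = svt(B - S + Z / rho, 1.0 / rho)     # low-rank update
        S = soft(B - L + Z / rho, lam / rho)    # sparse update
        Z = Z + rho * (B - L - S)               # dual update
    return L, S
```

The default lam = 1/sqrt(max(m, n)) is the usual choice from the robust PCA literature.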

Image restoration in the presence of impulse noise has been pursued by a number of authors. Moreover, the matrix K is either a square identity matrix or an ill-conditioned matrix. Very recently, Yan [57] proposed a new model for image restoration in the presence of impulse noise and mixed Gaussian-impulse noise. They further reformulate this problem and then solve it using an Adaptive Outlier Pursuit (AOP) algorithm.

The AOP algorithm is in fact an alternating minimization method, which splits the minimization over u and v into two steps. By iteratively restoring the image and updating the set of damaged pixels, the AOP algorithm is shown to outperform existing state-of-the-art methods for impulse-noise denoising by a large margin.
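The alternating structure of such a scheme can be sketched as follows. The restoration step is abstracted as a user-supplied denoise(b, mask) routine that restores the image using only the pixels currently marked as clean, and the detection step re-labels the K pixels with the largest residual as damaged. Both the routine and the residual-based rule are assumptions for illustration, not Yan's exact AOP updates.

```python
import numpy as np

def aop_style_sketch(b, denoise, K, iters=10):
    # Alternating minimization in the spirit of AOP:
    # (i) restore the image using only pixels currently marked as clean,
    # (ii) re-detect the K pixels with the largest residual as damaged.
    # `denoise` is a placeholder callable: denoise(b, mask) -> restored image.
    mask = np.ones_like(b, dtype=bool)          # True = pixel treated as clean
    u = b.copy()
    for _ in range(iters):
        u = denoise(b, mask)                    # step 1: restore with current mask
        residual = np.abs(u - b)
        thresh = np.partition(residual.ravel(), -K)[-K]
        mask = residual < thresh                # step 2: mark the K largest residuals as damaged
    return u, mask
```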

Moreover, scratches in photos and video sequences can also be viewed as a special type of impulse noise. However, removing this kind of noise is not easy, since corrupted pixels are randomly distributed in the image and their intensities are usually indistinguishable from those of their neighbors. A preliminary version of this paper appeared in [61].

Despite the merits of the AOP algorithm, we must point out that it has three drawbacks, which are unappealing in practice. First, the formulation in (17) is only suitable for mixed Gaussian-impulse noise.

Moreover, since the minimization sub-problem over u in (5) needs to be solved exactly in each stage, the algorithm may suffer from slow convergence. As analyzed in Section 2, our method is expected to produce higher-quality image restorations, as is also seen in our results. However, existing solutions are not appealing. Simple projection methods are inapplicable to our model, since they assume the objective function is smooth. Other relaxations introduce an additional hyper-parameter p, which may not be appealing in practice; this includes the iterative re-weighted least squares method [36] and the proximal point method.
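To illustrate why the hyper-parameter p appears in such relaxations, the following is a textbook iteratively re-weighted least squares (IRLS) sketch for min ||Ax - b||^2 + lam * sum_i |x_i|^p with 0 < p <= 1; it is included only as a generic reference point and is not the method of [36] verbatim.

```python
import numpy as np

def irls_lp(A, b, p=0.5, lam=0.1, iters=50, eps=1e-6):
    # IRLS sketch for min ||A x - b||^2 + lam * sum_i |x_i|^p (0 < p <= 1).
    # Each iteration solves a weighted ridge-regression problem in closed form.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        # Weights from the current iterate, smoothed by eps to avoid division by zero
        w = (np.abs(x) ** 2 + eps) ** (p / 2.0 - 1.0)
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
    return x
```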

Nevertheless, as observed both in our preliminary experiments and in those of Lu et al., the practical performance of direct ADM is worse than that of PDA; in fact, in our experiments we found PDA to be unstable. In our experiments, we apply the following algorithms. Sparsity enhancement is achieved by grouping similar 2D image blocks into 3D data arrays [22]. We utilize adaptive median filtering to remove salt-and-pepper impulse noise and adaptive center-weighted median filtering to remove random-valued impulse noise. We use this convex optimization method as our baseline implementation.
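As a rough stand-in for these filtering baselines, a plain (non-adaptive) median filter can be applied with scipy, as sketched below; the adaptive and center-weighted variants actually used in the experiments are more elaborate and are not reproduced here.

```python
from scipy.ndimage import median_filter

def median_denoise(noisy, window=3):
    # Plain sliding-window median filter: a crude stand-in for the
    # adaptive (center-weighted) median filters used as baselines.
    return median_filter(noisy, size=window)
```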

We use the implementation provided by the author and set the relaxation parameter to 1. For the denoising and deblurring tests, we use the following strategies to generate artificial noisy images. Although blurring-kernel estimation has been pursued by many studies, here we assume the blurring kernel is known.
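For illustration, the sketch below shows one common way to synthesize such test images, corrupting a fraction of the pixels with either salt-and-pepper noise (extreme values u_min/u_max) or random-valued impulse noise (uniform random values). The exact corruption protocol and noise levels of the experiments may differ.

```python
import numpy as np

def add_impulse_noise(u, level=0.3, kind="salt_pepper", u_min=0.0, u_max=1.0, seed=0):
    # Corrupt a fraction `level` of the pixels of image u with impulse noise.
    # kind="salt_pepper": corrupted pixels become u_min or u_max with equal probability.
    # kind="random_valued": corrupted pixels become uniform random values in [u_min, u_max].
    rng = np.random.default_rng(seed)
    noisy = u.copy()
    corrupted = rng.random(u.shape) < level           # Bernoulli mask of damaged pixels
    if kind == "salt_pepper":
        salt = rng.random(u.shape) < 0.5
        noisy[corrupted & salt] = u_max
        noisy[corrupted & ~salt] = u_min
    else:
        noisy[corrupted] = rng.uniform(u_min, u_max, size=u.shape)[corrupted]
    return noisy
```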

We run all the previously mentioned algorithms on the generated noisy and blurry images. Since the corrupted pixels follow a Bernoulli-like distribution, it is generally hard to measure the data fidelity between the original images and the recovered images. Therefore, we consider three ways to measure SNR.
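For reference, one standard definition of the signal-to-noise ratio between a reference image and a recovered image is given below; the three specific SNR variants used in the experiments are not spelled out in this excerpt, so this is only the generic formulation.

```python
import numpy as np

def snr_db(u_ref, u_rec):
    # Standard SNR in decibels: ratio of signal energy to reconstruction-error energy.
    err = np.linalg.norm(u_ref - u_rec)
    return 20.0 * np.log10(np.linalg.norm(u_ref) / max(err, 1e-12))
```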

Abstract. Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. The noise models considered include Gaussian noise [45, 12], Laplace noise [58, 21], and impulse noise, in which a certain percentage of pixels are altered to be either u_min or u_max. Taking the negative logarithm of the likelihood, the estimate is obtained as the solution of a minimization problem, so image restoration can be modelled globally as a single optimization problem; in order to make use of more prior information, we also consider a box-constrained model. The sub-problem solutions u^k, v^k, and y^k admit closed-form expressions; please refer to Appendix A for details.
