Steepest descent and backtracking line search in MATLAB. A generic line-search method repeats the following until a stopping criterion is satisfied: compute a search direction p_k (for example the steepest-descent direction or the Newton step), use an exact or backtracking line search to scale the step, and update the iterate. At any iteration k, the next point is the sum of the old point and the scaled step. Taking a shorter step, as you do when removing the fminbnd line search, has a chance of landing you somewhere where the gradient points more directly toward the global minimum, thus speeding convergence.

Review: globalization/line search and the steepest-descent method. Newton's method combined with a line search is called damped Newton. Backtracking line search: start with α = 1, check the Armijo condition, and if needed reduce α until the Armijo condition is satisfied. Another example, given in [HUL93, vol. 1, p. 363], applies to a subgradient method in which the search direction is defined by the steepest-descent direction. The Zoutendijk condition is a powerful tool for analyzing the convergence of line search methods. Use μ = 1/4 to define the sufficient-decrease criterion in the backtracking algorithm.

The normalized steepest-descent direction solves argmin ∇f(x)ᵀs subject to ‖s‖ = 1. In a classic nonsmooth example with exact line search, the iterates x_t are all differentiable points with x_t → (0,0) as t → ∞; even though it never hits nondifferentiable points, steepest descent with exact line search gets stuck around a non-optimal point.

A generic line-search method requires p_k to be a descent direction, i.e. g_kᵀp_k < 0 whenever g_k ≠ 0, so that for small steps along p_k the objective function will be reduced; it then calculates a suitable step length α_k > 0 so that f(x_k + α_k p_k) < f_k. The computation of α_k is the line search, which may itself be an iteration. The steepest descent method [2, 3] is a line search technique.

Exercise 3.1: program the steepest descent and Newton algorithms using the backtracking line search, Algorithm 3.1.
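The backtracking rule just described (start with α = 1, check the Armijo condition, reduce α until it holds) can be sketched in a few lines. This is a minimal sketch in Python rather than MATLAB; the parameter values c = 10⁻⁴ and ρ = 0.5 are conventional assumptions, not taken from the text.

```python
def backtracking_line_search(f, grad, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: start with alpha = alpha0 (typically 1) and shrink
    it until the sufficient-decrease condition
        f(x + alpha*p) <= f(x) + c * alpha * grad(x)^T p
    holds.  p must be a descent direction (grad(x)^T p < 0), which guarantees
    the loop terminates for smooth f."""
    fx = f(x)
    slope = sum(gi * pi for gi, pi in zip(grad(x), p))  # directional derivative, < 0
    alpha = alpha0
    while f([xi + alpha * pi for xi, pi in zip(x, p)]) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# 1-D sanity check on f(x) = x^2 at x = 3 with p = -f'(3) = -6:
alpha = backtracking_line_search(lambda x: x[0] ** 2, lambda x: [2.0 * x[0]],
                                 [3.0], [-6.0])
```

With these inputs the full step α = 1 overshoots (it lands at x = −3 with no decrease), so one halving is needed.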
Note that d̄ = −∇f(x̄) is a descent direction as long as ∇f(x̄) ≠ 0. To see this, simply observe that d̄ᵀ∇f(x̄) = −∇f(x̄)ᵀ∇f(x̄) < 0 so long as ∇f(x̄) ≠ 0. (See [CZ13].) With exact line search, successive steepest-descent directions are orthogonal to each other.

Exercise 3.1: program the steepest descent and Newton algorithms using the backtracking line search, Algorithm 3.1, and use them to minimize the Rosenbrock function (2.22). Now let us take another initial point and see how the steepest descent method with backtracking line search behaves on the Rosenbrock function.

3.4 Steepest Descent Method. The steepest descent method uses the gradient vector at x_k as the search direction for the major iteration k. The algorithm works with any quadratic function (degree 2) in two variables. MATLAB files needed for homework: fpiseq.m. Line search procedures are useful and efficient techniques for solving unconstrained optimization problems, especially small- and middle-scale problems. With backtracking line search, we start with an initial step size alpha and check whether x + alpha*d is an acceptable point; if not, alpha is reduced.

There are also different flavors of gradient descent: batch gradient descent, stochastic gradient descent, and mini-batch gradient descent; we shall see these in depth in further posts. Convergence: we also need a criterion to decide when the search has found a solution. At each step, update x := x + tΔx.

A common question: "I thought the point of the backtracking line search was to find an optimal value α_k such that I get to the minimum." It is not; backtracking only guarantees sufficient decrease along the line, not an exact minimizer.
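The exercise above (steepest descent with backtracking, applied to the Rosenbrock function) can be sketched as follows. This is a Python sketch under assumed parameters (c = 10⁻⁴, ρ = 0.5, gradient tolerance 10⁻⁴); steepest descent is known to be slow on this problem, so the iteration cap is large.

```python
def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return [-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
            200.0 * (x[1] - x[0] ** 2)]

def steepest_descent(f, grad, x, tol=1e-4, max_iter=100000, c=1e-4, rho=0.5):
    """Steepest descent with Armijo backtracking; alpha restarts at 1 each
    iteration.  Stops when the gradient norm falls below tol."""
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        p = [-gi for gi in g]                      # steepest-descent direction
        slope = sum(gi * pi for gi, pi in zip(g, p))
        alpha, fx = 1.0, f(x)
        for _ in range(50):                        # bounded backtracking loop
            if f([xi + alpha * pi for xi, pi in zip(x, p)]) <= fx + c * alpha * slope:
                break
            alpha *= rho
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
    return x

x_min = steepest_descent(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
```

Starting from (−1.2, 1), the iterates crawl down the curved valley toward the unique minimizer (1, 1); expect thousands of iterations, which is exactly the slowness the text describes.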
3 Wolfe Conditions. See also: steepest descent gradient method for on-line training a multilayer perceptron.

Steepest descent takes a step along the direction p_k = −∇f(x_k), with backtracking parameters (for example) α = 0.01 and reduction parameter β = 0.5. Set the initial step length α₀ = 1 and print the step length used by each method at each iteration.

minRosenBySD.m: in this script we apply steepest descent with the backtracking line search to minimize the 2-D Rosenbrock function, starting at the point x = (−1.2, 1). One user reports: "although the function itself has a unique solution at (1,1), I'm getting (−inf, inf) as an optimal solution" — a typical symptom of an uncontrolled step length.

1.1 Linear steepest descent. Steepest descent can also be used as a gradient algorithm to solve a system of linear equations Ax = b, given by a matrix A ∈ Rⁿˣⁿ and a vector b ∈ Rⁿ. Most of the functions run as scripts on toy problems. Then do the same with the bisection-method line search, with ε = 10⁻⁴. The minimization must be terminated when either ‖g(x_k)‖ ≤ 10⁻⁵ or 75 iterations are performed. Refer to the comments for all the important steps in the code to understand the method.

Update: x ← x + tΔx_sd. Steepest descent with line search: we perform some kind of line search to identify an appropriate value for the step length, rather than simply fixing the step-length parameter.

Choosing a search direction: a naive approach uses basis or steepest-descent directions (very inefficient in the worst case); trying new directions and keeping the good ones gives Powell's method or conjugate gradients; using a Newton or quasi-Newton direction is generally the fastest method. Then do a univariate minimization along that direction — this step is called a "line search."

Gradient Descent Backpropagation: the batch steepest descent training function is traingd.
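For the linear case Ax = b mentioned above, the exact line search has a closed form: with residual r = b − Ax (the negative gradient of f(x) = ½xᵀAx − bᵀx), the minimizing step is α = rᵀr / (rᵀAr). A minimal Python sketch, assuming A is symmetric positive definite; the tolerance and iteration cap are illustrative choices:

```python
def steepest_descent_linear(A, b, x, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A by steepest descent on
    f(x) = 0.5 x^T A x - b^T x, using the closed-form exact line search
    alpha = r^T r / (r^T A r), where r = b - A x is the residual."""
    n = len(b)
    for _ in range(max_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        rr = sum(ri * ri for ri in r)
        if rr ** 0.5 < tol:
            break
        Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(r[i] * Ar[i] for i in range(n))  # exact minimizing step
        x = [x[i] + alpha * r[i] for i in range(n)]
    return x

x_sol = steepest_descent_linear([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

For this well-conditioned 2×2 system the iteration converges quickly to the solution (1/11, 7/11).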
3 Steepest Descent. The zig-zag behaviour is clear in the zoomed view (100 iterations): the algorithm crawls down the valley. The 1-D line minimization must be performed using one of the earlier methods (usually cubic polynomial interpolation); this motivates Conjugate Gradients. Setting the accuracy and using the backtracking line search to find a proper step length, the figures (omitted here) show the convergence process of the three algorithms.

% Program the steepest descent and Newton algorithms using the backtracking
% line search, Algorithm 3.1.

CCO-10/11: P-002, Section-2: Unconstrained Minimization: Backtracking Line Search & Gradient Descent. Set the initial step length α₀ = 1 and print the step length used by each method at each iteration.

Related methods: steepest descent in a known metric, conjugate gradient (requires a line search), and Rprop (heuristic, but quite efficient). Steepest descent is a line search method that moves along the downhill direction — the same as gradient descent, but with the descent direction stated explicitly: start with a point x; repeat: Δx := −∇f(x); choose a step size t > 0; update x := x + tΔx. This motivates the Armijo rule.

Well, steepest descent is known to be slow, which is why nobody ever uses it, except as a textbook example. Exercise: write a program, in MATLAB or another language you know, to optimize a 2-D polynomial function f(x, y) using the method of steepest descent or ascent; output the iteration sequence. The descent direction can be computed by various methods, such as the negative gradient or a quasi-Newton direction. The SciPy module scipy.optimize contains implementations of various optimization algorithms, including several line search methods.
"Could you explain why you wish to use steepest descent and not any other method? It might shed some light on your actual problem." – amit, Dec 20 '11

For most descent methods, the optimal point is not required in the line search. We do need to show that the backtracking line search is well-defined: because the search direction is a descent direction, the Armijo condition holds for all sufficiently small α, so the backtracking loop terminates. For the theory, any good book on optimization techniques can be consulted.

Observe that the (normalized) steepest-descent direction depends not only on the point but also on the norm. The residuals we used in steepest descent will do nicely as search directions, because we selected each next residual to be orthogonal to the previous search directions (Equation 39).

Exercise: to apply steepest descent with backtracking to the Rosenbrock function, create the functions funcRosenbrock, gradRosenbrock and alphabacktracking, and create a new main file. Solving the exact line-search problem (2.3) at every iteration of the gradient or steepest descent algorithms may be difficult and costly, which is why backtracking is preferred in practice. You can either implement your own line search (preferred) or use mine.

A practical observation from a forum discussion: it can be effective to take many fixed steps (with the direction found by some algorithm, e.g. CG, L-BFGS, Nesterov's accelerated gradient descent, or just the plain gradient direction) and only occasionally do a line search in the direction of the negative gradient — which then takes a huge step, sometimes as large as 10⁵.

Secondly, we provide a theorem, which we recently proved, that gives the multiobjective steepest-descent search direction for an arbitrary number of objective functions.

Here we will create an arbitrary loss function and attempt to find a local minimum using gradient descent with backtracking line search, with tolerance 10⁻⁴ and initial point w = 0.
Topics: line search methods, in particular the backtracking line search and the exact line search; normalized steepest descent; Newton steps. A fundamental problem of the method is local minima; another is the ping-pong (zig-zag) effect.

The descent direction is defined in two steps. "Once the step size is found, I will implement a gradient descent algorithm." – RocketSocks22, Sep 6 '18 ("by step size you mean the t value?")

Some MATLAB code is available to plot contours of functions, steepest descent, backtracking line search, convergence-order estimation, numerical gradient and Hessian, etc.

Choice of norm for steepest descent: running steepest descent with backtracking line search for two norms P₁ and P₂ (the ellipses show {x : ‖x − x⁽ᵏ⁾‖_P = 1}) shows that the choice of P has a strong effect on the speed of convergence. We enforce sufficient decrease using the Armijo-Goldstein condition. The path of steepest descent requires the direction to be opposite to the sign of the coefficient. Steepest descent is a low-complexity, low-storage method.

steepestdescent.m optimizes a general multivariable real-valued function using the steepest descent method. The SciPy module, in particular, provides a useful routine for calculating a step size satisfying the Wolfe conditions described above, which is more robust and efficient than our simple backtracking. Compare steepest descent, Newton, quasi-Newton (BFGS) and Gauss-Newton using a line search method, and justify your decision. Next, we want good estimates of the optimal point w and the function value p = f(w).

Steepest descent is a line search method that moves along the downhill direction. Given γ₀ > 0 and ρ ∈ (0,1), set α_k := γ₀ρⁱ for the smallest integer i such that

    f(x⁽ᵏ⁺¹⁾) ≤ f(x⁽ᵏ⁾) − (α_k/2)‖∇f(x⁽ᵏ⁾)‖₂².   (9)

Figure 4 shows the result of applying gradient descent with a backtracking line search to the same example as in Figure 3.
The weights and biases are updated in the direction of the negative gradient of the performance function; the batch steepest descent training function is traingd. fpiseq.m: fixed-point iteration.

Figure 1.1: the method of steepest descent approaches the minimum in a zig-zag manner, where each new search direction is orthogonal to the previous one. The method of steepest descent is the simplest of the gradient methods.

Next, the Goldstein ε-subgradient is approximated by an iterative method and a descent direction is computed using a positive-definite matrix. As a novel application of this method, we consider the set of nonlinear constraints as the third objective, where the first two objectives are the maximization of the life …

In mathematics, the method of steepest descent (also called the stationary-phase or saddle-point method) is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. This is a different topic from the optimization algorithm discussed here.

Since the function is quadratic, its restriction to any line is quadratic, and therefore the line search on any line can be implemented using Newton's method. Need for a sufficient-decrease condition: this condition ensures that the step size we take actually makes our function decrease. Some of the MATLAB code used follows the book "Applied Numerical Analysis"; MATLAB code for the Wolfe line search method is also available.

Steepest descent direction: the gradient ∇f(x) is sometimes called the steepest descent direction. Is it really? Here is a possible definition: the steepest descent direction is the one where, when I make a step of length 1, I get the largest decrease of f in its linear approximation.
This scheme, however, requires that one evaluate the objective repeatedly. Comparing Newton's method with gradient descent — Backtracking: the backtracking line search has roughly the same cost for both; each uses O(n) flops per inner backtracking step. Conditioning: Newton's method is not affected by a problem's conditioning, but gradient descent can seriously degrade. Fragility: Newton's method may be empirically more sensitive to bugs and numerical errors; gradient descent is more robust.

Exact Line Search. Line-Search Methods for Smooth Unconstrained Optimization, Daniel P. Robinson, Department of Applied Mathematics and Statistics, Johns Hopkins University, September 17, 2020. Outline: 1. generic line-search framework; 2. computing a descent direction p_k (search direction): the steepest descent direction and the modified Newton direction.

For the steepest descent algorithm with exact line search, we have convergence starting from any initial point (this is called global convergence). Theorem: for the steepest descent algorithm with a fixed step size on a quadratic with Hessian Q, we have global convergence if and only if the step size satisfies 0 < α < 2/λ_max(Q), where λ_max(Q) denotes the maximum eigenvalue of Q.

Examples of MATLAB .m files. In (unconstrained) optimization, the backtracking line-search strategy is used as part of a line search method, to compute how far one should move along a given search direction. This page describes gradient descent with exact line search for a quadratic function of multiple variables. Search directions: the simplest search direction gives the gradient descent algorithm; the script steepestdescent.m implements it. Steepest descent: compute the direction, then use a line search or backtracking line search to set x_{i+1} = x_i + α_i d_i.

I am trying to get some advice on executing code that I found in the book "Numerical Methods using MATLAB". The quasi-Newton algorithms can attain a superlinear rate of convergence, being superior to the steepest descent or coordinate descent methods. Test your routine by writing a MATLAB program SteepDescentLS.
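The fixed-step threshold 2/λ_max can be seen directly on a separable quadratic, where each coordinate contracts independently. A small Python demonstration (the eigenvalues 1 and 10 and the step sizes 0.19 and 0.21 are illustrative choices, bracketing the threshold 2/10 = 0.2):

```python
def gd_fixed_step(lams, x, t, iters=200):
    """Gradient descent with fixed step t on f(x) = 0.5 * sum(lam_i * x_i^2),
    whose Hessian has eigenvalues lam_i.  Each coordinate evolves as
    x_i <- (1 - t*lam_i) * x_i, so the iteration converges from any start
    iff |1 - t*lam_i| < 1 for all i, i.e. 0 < t < 2/lam_max."""
    for _ in range(iters):
        x = [(1.0 - t * l) * xi for l, xi in zip(lams, x)]
    return x

lams = [1.0, 10.0]                              # lam_max = 10, threshold 2/10 = 0.2
x_ok = gd_fixed_step(lams, [1.0, 1.0], 0.19)    # 0.19 < 0.2: every coordinate shrinks
x_bad = gd_fixed_step(lams, [1.0, 1.0], 0.21)   # 0.21 > 0.2: the stiff coordinate blows up
```

The stiff coordinate is multiplied by |1 − 0.21·10| = 1.1 per iteration in the second run, so it diverges geometrically even though the other coordinate still converges.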
This is typically done by a line search. In the case of multiple variables (x, y, z, …), the iteration of the method is x_{k+1} = x_k + α_k d_k. Comparing this iteration with that of Newton's method previously discussed, we see that they both take the form x_{k+1} = x_k + α_k d_k, where the vector d_k is some search direction and α_k is the step size. Write a MATLAB m-file steepest.m, and implement the following pseudocode in OptimizerGDLS.

Homework 2 (due on March 30th, Wednesday, by 14:00): questions 1, 5, 11 and 12 require computations in MATLAB.

Line search, in summary: exact minimization is only possible in particular cases. Line search in real life (steepest descent algorithm): choose the search direction d = −∇f(x_k); for an exact line search, take τ = argmin_τ f(x_k + τd); for a backtracking/Armijo line search, while f(x_k + τd) > f(x_k) + α(τd)ᵀ∇f(x_k), set τ ← τ/2; then update the iterate x_{k+1} = x_k + τd.

We minimize f over x ∈ ℝⁿ, where f(x) is differentiable, iterating until the stopping criterion is satisfied. The method converges with exact or backtracking line search, with upper-bounded convergence-rate results: the constant c ∈ (0,1) depends on m, x₀, and the line search. With standard steepest descent, the learning rate is held constant throughout training.

3.1 Backtracking line search: complete the implementation of the backtracking line search in line_search_backtracking. This is Algorithm 9.2 of Boyd and Vandenberghe. The figure shows the gradient descent after 8 steps. Newton's method for optimization (Sections 3.1 onward). Terminate when either ‖∇f(x_k)‖₂ ≤ 10⁻⁴ or 100000 function evaluations have been taken, whichever comes first.
Now, let us take another initial point and see how the steepest descent method with backtracking line search behaves on the Rosenbrock function. The figure shows different basins of attraction with different line styles, and indicates the directions of steepest descent with arrows. Use MATLAB's contour function to plot the contours (in 2-D) and the progress of your method.

Gradient descent search determines a weight vector w that minimizes E(w) by starting with an arbitrary initial weight vector and repeatedly modifying it in small steps; at each step, the weight vector is modified in the direction that produces the steepest descent along the error surface.

Section III states formally the problem of interest. −g is the direction of steepest descent. A backtracking line search involves starting with a relatively large step size α for movement along the search direction d and iteratively shrinking the step size until a sufficient decrease of the objective function is observed; an exact line search instead minimizes f along the ray. Introduction to eigenvalues.

Use backtracking line search, calling the script you wrote in Part 1. MATLAB code for the Armijo line search with backtracking is available. It is easy to see that the negative gradient direction −g_k provides a descent direction.

The Method of Steepest Descent: steepest descent is a gradient algorithm where the step size is chosen to achieve the maximum amount of decrease of the objective function at each individual step. 3.4 Backtracking (Armijo rule): the line search stops with a step length satisfying the sufficient-decrease condition.

Files: rosen.m (the Rosenbrock function, outputting f, grad(f) and Hessian(f), needed for newtonbtls.m); steepdbtls.m (steepest descent with backtracking line search, Algorithm 3.1). Backtracking is easily implemented and works well in practice. (Nonlinear optimization, © 2006 Jean-Philippe Vert.)
I find it disappointing that MATLAB's Optimization Toolbox doesn't provide such a basic Newton-Raphson solver based on line searches (one can write this program, of course, but paying a huge amount for an optimization toolbox should efficiently save this time). The initial guess is extremely important for Newton-like methods. We assume ∇f(x) is continuous. (Figure: suboptimality and step size versus iteration, for exact and backtracking line searches.)

Outline: 2. backtracking Armijo line search; global convergence of the backtracking Armijo line search; global convergence of steepest descent; 3. Wolfe-Zoutendijk global convergence; the Wolfe conditions; the Armijo-Goldstein conditions; 4. algorithms for line search: Armijo, parabolic-cubic search, Wolfe line search; computing the step length. Then, a line search algorithm is presented to find a step length satisfying the generalized Wolfe conditions.

For Scilab users: replace the MATLAB comment '%' by its Scilab counterpart '//' in numericaltour.sce. Here, we will implement a simple representation of gradient descent using Python. Consider the problem of finding a solution to the following system of two nonlinear equations by the steepest descent method: g₁(x, y) ≡ x² + y² − 1 = 0, g₂(x, y) ≡ x⁴ − y⁴ + xy = 0.

Update x := x + tΔx. Any method that uses the steepest-descent direction as a search direction is a method of steepest descent. Exercise 2.
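A basic line-search Newton (damped Newton) solver of the kind lamented above can be written in a few lines for a two-variable problem. This is a Python sketch under my own assumptions: the 2×2 Newton system is solved by Cramer's rule, the Armijo parameters are conventional (c = 10⁻⁴, halving), and a fall-back to −g is added in case the Newton direction is not a descent direction.

```python
def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return [-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
            200.0 * (x[1] - x[0] ** 2)]

def rosenbrock_hess(x):
    return [[1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
            [-400.0 * x[0], 200.0]]

def damped_newton(f, grad, hess, x, tol=1e-8, max_iter=100):
    """Newton's method with a backtracking (Armijo) line search, for n = 2.
    The 2x2 Newton system H p = -g is solved by Cramer's rule; if the Newton
    direction is not a descent direction, fall back to -g."""
    for _ in range(max_iter):
        g = grad(x)
        if (g[0] * g[0] + g[1] * g[1]) ** 0.5 < tol:
            break
        H = hess(x)
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        p = [(-H[1][1] * g[0] + H[0][1] * g[1]) / det,
             ( H[1][0] * g[0] - H[0][0] * g[1]) / det]
        slope = g[0] * p[0] + g[1] * p[1]
        if slope >= 0.0:                       # safeguard: revert to steepest descent
            p = [-g[0], -g[1]]
            slope = -(g[0] * g[0] + g[1] * g[1])
        alpha, fx = 1.0, f(x)                  # damped Newton: try the full step first
        for _ in range(60):
            if f([x[0] + alpha * p[0], x[1] + alpha * p[1]]) <= fx + 1e-4 * alpha * slope:
                break
            alpha *= 0.5
        x = [x[0] + alpha * p[0], x[1] + alpha * p[1]]
    return x

x_star = damped_newton(rosenbrock, rosenbrock_grad, rosenbrock_hess, [-1.2, 1.0])
```

From the standard starting point (−1.2, 1), this converges to (1, 1) in a few dozen iterations, versus the thousands that steepest descent needs.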
The same convergence results apply when the line search is inexact. The gradient vector g(x_k) is also the direction of maximum rate of increase of f at x_k.

Line Search Methods (Shefali Kulkarni-Thaker): consider the unconstrained optimization problem min f(x), x ∈ ℝ. Any optimization algorithm starts from an initial point x₀ and performs a series of iterations to reach the optimal point x*. At each iteration: (a) compute the steepest descent direction Δx; (b) perform a line search and update. MATLAB implementations are given along the way. However, each step direction is restricted to be that of the initial trial step. The Matlog Reference (HTML) provides a listing of help information for each function in the Matlog toolbox. Right now I will only mention that the Gram-Schmidt process is the method we will use; later you will see how it works and what it does.

For a linear function f(x) = cᵀx, the direction of steepest descent at any point is d = −c. newtonbtls.m: Newton method with backtracking line search. Convergence analysis will give us a better idea of which step size is just right. The steepest descent method usually does not converge without step-length control, unless we fix the step length to be sufficiently small. The search direction is given by the equation p_k = −B_k⁻¹∇f_k. Fuzzy c-means clustering and least squares for training an approximator. f.m is the m-file for the function f(x); grad.m is the m-file for its gradient.

Example: if the initial experiment produces a fitted response ŷ with coefficients of magnitudes 2, 3 and 6 on x₁, x₂ and x₃, the path of steepest ascent moves each variable in proportion to its coefficient, in the direction of the coefficient's sign. Syllabus item 2: steepest descent.
We also need a theorem to do it. As mentioned previously, the gradient vector is orthogonal to the plane tangent to the isosurfaces of the function. If d_k = 0, then stop; otherwise perform the line search and iterate until the stopping criterion is satisfied. However, this implementation uses an Armijo (backtracking) line search. These extensions opened perspectives for the formulation of new algorithms for vector optimization problems.

For the above example, line search reduces the number of iterations to arrive at a reasonable solution from several thousand to around 50: the search direction defines a direction but not a step length, and approaches such as a line search can reduce the number of iterations required. Descent methods are discussed along with exact line search and backtracking. Some practical advice regarding MATLAB: all computations reported in this book were done in MATLAB.

Backtracking with Armijo's condition has been used at least since Armijo (1966). Backtracking line search: (1) choose α̂ > 0, ρ ∈ (0,1) and c ∈ (0,1); (2) determine the steepest descent direction Δx; (3) shrink the step until the sufficient-decrease condition holds. The minimization must be terminated when ‖g(x_k)‖ is sufficiently small. Note that, from Step 2 and the fact that d_k = −∇f(x_k) is a descent direction, it follows that f(x_{k+1}) < f(x_k).

Line Search Step Length, Steepest Descent Method: we define the steepest descent direction to be d_k = −∇f(x_k).

% MATLAB script file implementing the method of steepest descent
% Inputs:
%   x      = starting vector
%   xa, xb = x-interval used in contour plot
%   ya, yb = y-interval used in contour plot
%   tol    = tolerance for stopping iteration
% Required m-file: fp.m

With backtracking parameters α = 0.01, β = 0.5, the backtracking line search is almost as fast as an exact line search.
Armijo backtracking and steepest descent to find a local minimum (Sanjeev Sharma, 3rd Dec 2010).

1.1 The Algorithm. The problem we are interested in solving is P: minimize f(x). Backtracking line search: starting from an initial s > 0, repeat s ← βs until the following sufficient-decrease condition is satisfied: f(x + sd) < f(x) + αs∇f(x)ᵀd, with parameters α ∈ (0,1) and β ∈ (0,1). In this case, the line search manages to adjust the step.

Line search in SciPy: the SciPy module scipy.optimize provides line-search routines. It is possible to visualize the line search and experiment with different update rules for the inverse Hessian in order to understand the optimization.

Steepest Descent Algorithm: Step 0: given x₀, set k = 0. Step 1: d_k = −∇f(x_k). Compute the gradient and, where applicable, a matrix B_k: if B_k is the identity matrix, this is the gradient descent method; if the inverse of the Hessian is approximated, it is a quasi-Newton method. For steepest descent, ‖p_k‖₂ = ‖g_k‖₂. The normalized steepest descent direction is defined with respect to a chosen norm; this gives another insight into why the Newton step should be a good search direction, and a very good search direction when x is near x*.

Choice of norm for steepest descent: running steepest descent with backtracking line search for two quadratic norms (the ellipses show {x : ‖x − x⁽ᵏ⁾‖_B = 1}) admits an equivalent interpretation: steepest descent with the quadratic norm ‖·‖_B is gradient descent after the change of variables x̄ = B^{1/2}x. When you read "residual", think "direction of steepest descent". But if the steepest descent direction is used with an exact line search, the step sizes produce zig-zag behavior, which makes the convergence very slow. Update: x := x + tΔx.
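The "residual = steepest-descent direction" remark and the zig-zag behavior can be verified numerically: on a quadratic with exact line search, each residual is orthogonal to the previous one. A short Python check (the 2×2 SPD matrix is an arbitrary illustrative choice):

```python
def sd_directions(A, b, x, steps=6):
    """Steepest descent with exact line search on f(x) = 0.5 x^T A x - b^T x.
    The search direction is the residual r = b - A x (the negative gradient);
    with the exact step alpha = r^T r / (r^T A r), each new residual is
    orthogonal to the previous one -- the source of the zig-zag path."""
    n = len(b)
    dirs = []
    for _ in range(steps):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dirs.append(r)
        Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        alpha = sum(ri * ri for ri in r) / sum(r[i] * Ar[i] for i in range(n))
        x = [x[i] + alpha * r[i] for i in range(n)]
    return dirs

dirs = sd_directions([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

Printing consecutive inner products shows they vanish (up to rounding): the method can only turn by 90 degrees at each step, which is exactly the zig-zag of Figure 1.1.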
Steepest descent direction in the Hessian norm: the Newton step is also the steepest descent direction at x for the quadratic norm defined by the Hessian ∇²f(x). Most proofs are omitted for space reasons and will appear elsewhere.

newtonbtls.m: Newton method with backtracking line search (Algorithm 3.1 of Boyd and Vandenberghe's treatment). I wanted to clarify the idea of the exact line search in the steepest descent method: the step size t is chosen by minimizing f along the ray, whereas other choices (e.g. an L1- or weighted-norm line search) are also possible. Step 3: set x_{k+1} = x_k + α_k d_k, k = k + 1, and go to Step 1. The notes to accompany the textbook (bring the corresponding part to each class).

The steepest descent method often yields zig-zag phenomena in solving practical problems, which makes the algorithm converge to an optimal solution very slowly, or even fail to converge [5, 6]. If you want to train a network using batch steepest descent, you should set the network trainFcn to traingd, and then call the function train. For alphabacktracking, the parameter steplengthParam should be a vector of ᾱ, c and ρ, in that order. This work was partially supported by CAPES and FAPEG.

% Implementation of Line Search Algorithm with Strong Wolfe
% Implementation of Steepest Descent with Backtracking

While it is trivial to mention: if the gradient of a cost function is Lipschitz continuous with Lipschitz constant L, then choosing a constant learning rate of size 1/L gives a special case of the backtracking line search (for gradient descent).
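A strong-Wolfe line search, as in the implementation named above, accepts a step only if it satisfies both a sufficient-decrease and a curvature condition. A small Python checker makes the two conditions concrete (the parameter values c₁ = 10⁻⁴ and c₂ = 0.9 are conventional assumptions):

```python
def satisfies_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9, strong=False):
    """Check whether the step length alpha satisfies the Wolfe conditions
    along the direction p:
      sufficient decrease:  f(x + a p) <= f(x) + c1 * a * g(x)^T p
      curvature:            g(x + a p)^T p >= c2 * g(x)^T p
    For the *strong* Wolfe conditions the curvature test becomes
      |g(x + a p)^T p| <= c2 * |g(x)^T p|."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x_new = [xi + alpha * pi for xi, pi in zip(x, p)]
    slope0 = dot(grad(x), p)                  # should be negative (descent direction)
    if f(x_new) > f(x) + c1 * alpha * slope0:
        return False                          # sufficient decrease fails
    slope1 = dot(grad(x_new), p)
    if strong:
        return abs(slope1) <= c2 * abs(slope0)
    return slope1 >= c2 * slope0

f = lambda x: x[0] ** 2
g = lambda x: [2.0 * x[0]]
ok_half = satisfies_wolfe(f, g, [1.0], [-2.0], 0.5)    # steps exactly to the minimizer
ok_tiny = satisfies_wolfe(f, g, [1.0], [-2.0], 0.01)   # step too short
```

The second call fails the curvature condition: a tiny step decreases f slightly, but the slope at the new point is still nearly as steep as at x, which is precisely what the curvature condition rules out.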
The gradient vector at a point, g(x_k), points in the direction of maximum rate of increase of f. Step 2: choose the step size α_k by performing an exact (or inexact) line search. 2-D Newton's and steepest descent methods in MATLAB. The steepest descent method is convergent. To prevent the non-linear conjugate gradient method from restarting so often, the method was modified to accept the conjugate gradient step whenever a sufficient decrease condition holds.

On this function, using a bisection-based backtracking line search with Armijo parameter in [0, 1/3] and starting at (2, 3), steepest descent generates the sequence x_k = 2⁻ᵏ(2, (−1)ᵏ3), k = 1, 2, …, converging to (0, 0).

The cost function is used as the descent function in the CSD method. For an exact line search we evaluate f(x_k + αp), take the derivative with respect to α, set the equation to zero, and then solve for α. The steepest descent method uses the gradient vector at each point as the search direction for each iteration. Write a MATLAB m-file steepest.m.

Gradient descent method: the general descent method with Δx = −∇f(x), given a starting point x ∈ dom f. This figure shows two one-dimensional minima. (Figure: suboptimality and step size t versus iteration k, for exact and backtracking line searches.) minRosenBySD: minimize Rosenbrock by steepest descent.
The backtracking line search is almost as fast as an exact line search (and much simpler), and it clearly shows the two phases of the algorithm. 10.7 Quasi-Newton methods. Line search: choose the step size t by either backtracking line search or exact line search. One refinement is to modify the step length t_k by means of a positive parameter θ_k, in a multiplicative manner, in such a way as to improve the behaviour of the classical gradient algorithm.

(20 points) Program the steepest descent algorithm using the backtracking line search and minimize the Rosenbrock function f(x) = 100(x₂ − x₁²)² + (1 − x₁)². The learning rate controls the magnitude of the change. The classical steepest descent method uses the exact line search. The lecture revolves around unconstrained minimization problems that might or might not enjoy closed-form solutions, and a general descent line-search method that satisfies the vector-valued Wolfe conditions. We define the steepest descent update step to be s_k^SD = λ_k d_k for some λ_k > 0.

Starting from a particular point such as (−1.2, 1), the steepest descent method with backtracking line search applied to the Rosenbrock function behaves as follows: for the same example, Figure 5.4 shows gradient descent after 100 steps, and Figure 5.5 shows gradient descent after 40 appropriately sized steps. The notes to accompany the textbook (bring the corresponding part to each class): Carreira-Perpiñán, M. Á.

In contrast, BFGS with the same line search rapidly reduces the function value towards −∞ (arbitrarily far, in exact arithmetic). Equivalent interpretation: steepest descent with the quadratic norm ‖·‖_P is gradient descent after the change of variables x̄ = P^{1/2}x, and the choice of P has a strong effect on the speed of convergence.
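The change-of-variables interpretation can be made concrete: steepest descent in the norm ‖·‖_P takes the direction Δx = −P⁻¹∇f(x), and on a quadratic f(x) = ½xᵀAx − bᵀx the choice P = A turns it into the Newton step, which reaches the minimizer in one unit step. A Python sketch (the 2×2 matrix and its hand-computed inverse are illustrative choices):

```python
def sd_quadratic_norm(A, b, P_inv, x, steps=1):
    """Steepest descent w.r.t. the quadratic norm ||u||_P = (u^T P u)^(1/2):
    the descent direction is dx = -P^{-1} grad f(x), i.e. plain gradient
    descent after the change of variables xbar = P^(1/2) x.  Applied to the
    quadratic f(x) = 0.5 x^T A x - b^T x with a unit step t = 1."""
    n = len(b)
    for _ in range(steps):
        g = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        dx = [-sum(P_inv[i][j] * g[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dx[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
A_inv = [[3.0 / 11.0, -1.0 / 11.0], [-1.0 / 11.0, 4.0 / 11.0]]  # A^{-1}, det(A) = 11
x_newton = sd_quadratic_norm(A, [1.0, 2.0], A_inv, [0.0, 0.0], steps=1)  # P = A
```

With P = A the single step lands on the exact minimizer A⁻¹b = (1/11, 7/11); with P = I (plain gradient descent) the same problem needs many steps, illustrating how strongly the choice of P affects convergence speed.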
The steepest-descent direction is orthogonal to the contours (level sets) of the cost function. MATLAB files: steep.m (steepest descent), polyline.m (polynomial line search routines). If you want to train a network using batch steepest descent, set the network trainFcn to traingd and then call the function train. The performance of the algorithm is very sensitive to the proper setting of the learning rate: if it is too small, the algorithm takes too long to converge; if it is set too high, the algorithm can oscillate and become unstable.

A "cyclic" steepest descent variant performs one accurate line search, then performs several iterations with the same step size under an inaccurate line search. In this lecture, we saw a quick proof that steepest descent with exact line search is a slow algorithm. In damped Newton, α < 1 is only needed far from the solution. The backtracking line search algorithm is meant to find an acceptable step size cheaply, not the exact minimizer along the line.

Exercise: complete the implementations of gradient descent and Newton's method in algorithm_gd.m and algorithm_newton.m; the new functions can again be verified in CodeGrade.

Comparison with Newton's method: backtracking has roughly the same cost for both (O(n) operations per inner backtracking step); Newton's method is not affected by a problem's conditioning, whereas gradient descent can seriously degrade; on the other hand, Newton's method may be empirically more sensitive to bugs and numerical errors.

In response-surface methodology, the movement of factor x_j along the path of steepest ascent is proportional to the magnitude of the regression coefficient b_j, with the direction taken being the sign of the coefficient.
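The sensitivity to the learning rate can be made exact on a one-dimensional quadratic: for f(x) = ½Lx², a fixed step s gives x_{k+1} = (1 − sL)x_k, which contracts iff 0 < s < 2/L. A small illustrative check (values of L and s are arbitrary):

```python
# Fixed-step gradient descent on f(x) = 0.5*L*x^2: each step multiplies the
# iterate by (1 - s*L), so the method converges iff |1 - s*L| < 1, i.e. s < 2/L.
def iterate(L, s, x0, steps):
    x = x0
    for _ in range(steps):
        x -= s * L * x          # gradient step x - s*f'(x), with f'(x) = L*x
    return x

stable   = iterate(10.0, 0.19, 1.0, 50)   # s < 2/L = 0.2: iterates shrink
unstable = iterate(10.0, 0.21, 1.0, 50)   # s > 2/L: iterates oscillate and grow
```

This is exactly the "too small is slow, too large oscillates" behaviour described above, in the simplest setting where it can be computed by hand.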
Another example, given in [HUL93, vol. 1, p. 363], applies to a subgradient method in which the search direction is defined by the steepest-descent direction, i.e. the negative of the element of the subdifferential with smallest norm; again an exact line search can get stuck at a non-optimal point. A typical student exercise is to apply steepest descent satisfying the strong Wolfe conditions to the Rosenbrock function with initial point x₀ = (1.2, 1.2).

The line search approach first finds a descent direction along which the objective function will be reduced, and then computes a step size that determines how far to move along that direction.

Global convergence of the steepest descent algorithm with backtracking line search is proven under the assumptions that the stepsize at each iteration is found by the line search and that the function f being minimized is continuously differentiable. Write a MATLAB m-file steepest.m that implements the method of steepest descent with a backtracking line search.

A variant of gradient descent is the steepest descent algorithm with respect to a general norm: the normalized steepest-descent direction is Δx_nsd = argmin{∇f(x)ᵀv : ‖v‖ = 1}. Gradient descent itself can be slow if t is too small.
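A minimal Python sketch of such a backtracking routine (the parameter names mu, beta, t0 are illustrative; the text above suggests μ = 1/4 for the sufficient-decrease test):

```python
# Backtracking line search: start at t = t0 and shrink t <- beta*t until the
# Armijo sufficient-decrease condition
#     f(x + t*d) <= f(x) + mu * t * grad_f(x)'d
# holds. Since d is a descent direction, the directional derivative is
# negative and the loop terminates.
def backtracking(f, fx, g, x, d, mu=0.25, beta=0.5, t0=1.0):
    slope = sum(gi * di for gi, di in zip(g, d))   # directional derivative < 0
    t = t0
    while f([xi + t * di for xi, di in zip(x, d)]) > fx + mu * t * slope:
        t *= beta
    return t

f = lambda v: v[0] ** 2 + v[1] ** 2
x = [1.0, 1.0]
g = [2.0, 2.0]                                      # gradient of f at x
t = backtracking(f, f(x), g, x, [-gi for gi in g])  # steepest-descent direction
```

For this quadratic the unit step overshoots, one halving is enough, and the routine returns t = 0.5.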
The search direction in steepest descent is the direction along which J decreases the most; the same gradient method is used for on-line training of a multilayer perceptron. Exercise 3.1: program the steepest descent and Newton algorithms using the backtracking line search, Algorithm 3.1; use them to minimize the Rosenbrock function (2.22), and terminate the algorithms when ‖∇f(x_k)‖ falls below a given tolerance. We would like to choose λ_k so that f(x) decreases sufficiently; Newton's method corresponds to steepest descent under the local Hessian norm ‖u‖_{∇²f(x)} = (uᵀ∇²f(x)u)^{1/2}. Even when steepest descent never hits non-differentiable points, with exact line search it can get stuck around a non-optimal point (i.e., (0,0)); the problem is that steepest descent directions may undergo abrupt changes.

In other words, we will choose a point x⁽¹⁾ = x⁽⁰⁾ + αr⁽⁰⁾. The question is: how big a step should we take? A line search is a procedure that chooses α to minimize f along the line x⁽⁰⁾ + αr⁽⁰⁾. Steepest descent is simple but slow; Newton's method is complex but fast. The origins are not entirely clear: Raphson became a member of the Royal Society in 1691 for his book "Analysis Aequationum Universalis", and published the method 50 years before Newton.

Exercise: write a function in MATLAB (or another suitable programming language) implementing Newton's method for optimization, using the Armijo/backtracking line search and switching to steepest descent (d_k = −∇f(x_k)) if the Hessian matrix ∇²f(x_k) is not positive definite. The basin of attraction for steepest descent is the set of initial values that lead to the same local minimum. The simplest stepsize rule is a constant step size s_k = s (to be studied further later). In quasi-Newton methods, the approximate Hessian matrix is updated using the BFGS method.
The steepest-descent direction can also be defined as the minimizer of ∇f(x)ᵀv over all directions with ‖v‖ = 1. A typical script highlights the gradient descent method as follows: set the initial step length to 1, pick an initial point, use a reduction parameter of about 0.5, and test the method on the two examples above as well as on the chained Rosenbrock function. For quadratics f(x) = ½xᵀBx with a symmetric positive definite matrix B, the behaviour of these methods can be analyzed exactly; a summary of quadratic forms explains the convergence of the steepest descent method. In the standard figures, Newton's method with line search reaches ‖∇f‖ < 10⁻³ after only 15 iterations, with ellipses showing the successive quadratic approximations — far superior to steepest descent. In machine-learning terms, we update the weight matrix W by taking a step in the negative direction of the gradient, thereby moving towards the bottom of the basin of the loss landscape (hence the term gradient descent).
The gradient descent method is therefore also called the steepest descent or downhill method. Algorithm (steepest descent, with exact or inexact line search): given a starting point x ∈ dom f, repeat: 1. compute the search direction Δx = −∇f(x); 2. line search: choose a step size t > 0, by exact or backtracking line search; 3. update x := x + tΔx; 4. until the stopping criterion is satisfied. When there are active constraints, the constrained steepest descent (CSD) method is based on using the cost function gradient as the search direction. Various line search termination conditions can be used to establish global convergence, but for concreteness we will consider only the Wolfe conditions. Use Algorithm 3.1 (backtracking, Nocedal & Wright, p. 37) to find α_k.

On cost: an exact line search (an actual minimization along the gradient direction) can be very expensive, while backtracking merely tests the objective value along the direction, requiring only a few extra function evaluations per iteration.

In MATLAB-style pseudocode, the basic loop of steepest descent with a simple-decrease backtracking line search is:

    for iter = 1:iterMax
        % evaluate f and the gradient g at x
        alpha = alphaInit;
        while true                     % backtracking on the step length
            dx = -alpha*g;
            % evaluate fnew at x + dx
            if fnew < f, break; end    % simple-decrease test
            alpha = gamma*alpha;       % gamma in (0,1), e.g. 0.5
        end
        x = x + dx;
        if norm(dx) < tol, break; end
    end

Your function must include these features. As a sanity check, solve the first iteration for α by hand and then compute α at every subsequent step programmatically. This problem is a practice on line search methods.
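The same loop in runnable form — a minimal pure-Python sketch of steepest descent with an Armijo backtracking line search (the parameter values mu = 1e-4 and beta = 0.5 are common illustrative choices, and the quadratic test problem is not from the exercises):

```python
# Steepest descent with backtracking (Armijo) line search; a minimal sketch.
def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000, mu=1e-4, beta=0.5):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 ** 0.5 < tol:          # stopping criterion ||grad f|| <= tol
            break
        fx = f(x)
        t = 1.0
        # shrink t until f(x - t*g) <= f(x) - mu*t*||g||^2  (Armijo condition)
        while f([xi - t * gi for xi, gi in zip(x, g)]) > fx - mu * t * gnorm2:
            t *= beta
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

xmin = steepest_descent(lambda v: v[0] ** 2 + 4.0 * v[1] ** 2,
                        lambda v: [2.0 * v[0], 8.0 * v[1]],
                        [2.0, 1.0])
```

On this mildly ill-conditioned quadratic the iterates zig-zag in the steep coordinate before settling, which is the textbook behaviour of steepest descent.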
Levenberg-Marquardt method for training a Takagi-Sugeno fuzzy system: see the linked example. Topics covered: line search methods, in particular backtracking line search, exact line search, normalized steepest descent, and Newton steps. The fundamental problem of the method is local minima: as the MATLAB demo pictures show, the iterates converge to a local minimum, and the algorithm's view is "myopic" — we move w in the direction of steepest descent, w := w − η∇J(w), where η is the learning rate.

The default line-search algorithm for steepest descent should now be much faster. Still, steepest descent is known to be slow, which is why it is rarely used in practice except as a textbook example.

A backtracking line search starts with a relatively large step size α along the search direction d and iteratively shrinks it until a sufficient decrease of the objective function is observed; an exact line search instead minimizes f along the ray x + αd. The backtracking exit inequality f(x₀ + td) ≤ f(x₀) + μt∇f(x₀)ᵀd holds for all t in some interval [0, t₀], so the backtracking loop terminates. Finally, the direction of steepest descent for the Euclidean ℓ₂ norm is Δx = −∇f(x), so steepest descent with respect to the Euclidean norm is exactly gradient descent, Δx ∝ −∇f(x).
Stop the method when either the gradient of the objective function has norm less than 10⁻⁴ or 1,000 steps have been made; the stopping criterion is usually of the form ‖∇f(x)‖₂ ≤ ε. One way to adaptively choose the step size is backtracking line search: first fix parameters 0 < β < 1 and 0 < α ≤ 1/2; then at each iteration, start with t = 1, and while f(x − t∇f(x)) > f(x) − αt‖∇f(x)‖₂², update t := βt. It is simple and tends to work pretty well in practice.

Exact line search does the best we can along the direction of the gradient at each iteration, t = argmin_{s≥0} f(x − s∇f(x)). Usually it is not possible to do this minimization exactly, and approximations to exact line search are often not much more efficient than backtracking, so it is not worth it.

A steepest descent method uses the direction of steepest descent as a descent direction at each iteration, taking a step in this direction with a stepsize chosen by some line-search method. Exercise: use the steepest descent direction to search for the minimum of f(x₁, x₂) = 25x₁² + x₂² starting at x⁽⁰⁾ = [1 3]ᵀ with a given fixed step size α.
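Steepest descent can also be taken with respect to a quadratic norm ‖u‖_P = (uᵀPu)^{1/2}: the direction becomes −P⁻¹∇f(x), which is plain gradient descent after the change of variables x̄ = P^{1/2}x. With a diagonal P matched to the curvature, a badly conditioned quadratic becomes perfectly conditioned. A sketch with illustrative data (the function and the choice of P are examples, not from the text):

```python
# Steepest descent under ||u||_P with diagonal P: direction d = -P^{-1} grad.
# For f(x, y) = 0.5*(x^2 + 100*y^2), choosing P = diag(1, 100) rescales the
# problem to a perfectly conditioned one, so a unit step reaches the minimum.
def pnorm_direction(g, P_diag):
    return [-gi / pi for gi, pi in zip(g, P_diag)]

x = [3.0, 2.0]
g = [x[0], 100.0 * x[1]]                     # gradient of f at x
d = pnorm_direction(g, [1.0, 100.0])         # -> [-3.0, -2.0]
x_new = [xi + di for xi, di in zip(x, d)]    # unit step lands at the minimum
```

This is the concrete sense in which "the choice of P has a strong effect on speed": the right P turns a zig-zagging iteration into a one-step solve.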
A common practical question when writing a simple gradient descent routine is how to select the step size. In the general setting of the steepest descent algorithm we have x_{n+1} = x_n − αG_n, where α is the step size and G_n is the gradient evaluated at the point x_n. The most powerful search direction when close to the minimum is the Newton direction. One can also show that the step length obtained by backtracking line search is bounded away from zero (roughly α_k ≥ min{1, βᾱ} for some ᾱ > 0 depending on the curvature of f), which is what makes the global convergence proof work.

Generic line search method: calculate a search direction p_k from x_k; ensure that this direction is a descent direction, i.e. g_kᵀp_k < 0 if g_k ≠ 0, so that for small steps along p_k the objective function will be reduced; then calculate a suitable steplength α_k > 0 so that f(x_k + α_k p_k) < f_k. The computation of α_k is the line search, and may itself be an iteration. For algorithms other than steepest descent, the convergence theory describes how far p_k can deviate from the steepest-descent direction and still give rise to a globally convergent iteration.

Exercise: write a MATLAB program (GradientDescent.m) implementing the steepest descent method, with d_k = −∇f(x_k) and an Armijo line search; a base line-search algorithm for descent methods can be built on simple backtracking. Note that with a line search that enforces descent, the method cannot converge to a local maximum from a starting point where the gradient is nonzero, since the function value strictly decreases at every step. An archive of functions introducing optimization and line search techniques is available, including a worked backtracking line search algorithm and example and a fuzzy c-means clustering/least-squares approximator demo.
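Combining the Newton direction with backtracking gives damped Newton. A one-dimensional sketch (the test function f(x) = eˣ + e⁻ˣ, with minimum at x = 0, is purely illustrative):

```python
# Damped Newton in one dimension: Newton step d = -f'(x)/f''(x), with
# backtracking on t until the Armijo condition holds. Near the solution the
# unit step t = 1 is accepted and convergence is quadratic.
import math

def damped_newton(f, fp, fpp, x0, iters=10, mu=1e-4, beta=0.5):
    x = x0
    for _ in range(iters):
        d = -fp(x) / fpp(x)
        t = 1.0
        while f(x + t * d) > f(x) + mu * t * fp(x) * d:   # Armijo test
            t *= beta
        x += t * d
    return x

xstar = damped_newton(lambda x: math.exp(x) + math.exp(-x),
                      lambda x: math.exp(x) - math.exp(-x),
                      lambda x: math.exp(x) + math.exp(-x),
                      1.0)
```

Starting from x₀ = 1 the full step is already accepted, and a handful of iterations drive the iterate to the minimizer to machine precision — the quadratic phase described above.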
Recently, several modifications to the steepest descent method have been presented to overcome its slow convergence. Exercise: implement the steepest descent method and Newton's method, both with backtracking line search, for minimizing a function of the form

    f(x₁, …, x₁₀₀) = Σ_{j=1}^{100} c_j x_j − Σ_{i=1}^{500} log(b_i − Σ_{j=1}^{100} a_ij x_j).

Your implementation just needs to work for this specific objective function (as opposed to a general one). For the linear solves inside Newton's method, in MATLAB you should use backslash: x = A\b. Algorithm (backtracking line search): choose α̂ > 0, ρ ∈ (0,1), and c ∈ (0,1); set α = α̂ and repeat α := ρα until the Armijo condition f(x_k + αp_k) ≤ f(x_k) + cα∇f(x_k)ᵀp_k holds.

Some MATLAB code is available to plot contours of functions, run steepest descent with backtracking line search, estimate convergence order, compute numerical gradients and Hessians, and demonstrate invariance properties. A caveat for quasi-Newton methods: a backtracking or Armijo line search, unlike one that enforces the Wolfe conditions (see the book by Nocedal and Wright), does not guarantee y_kᵀs_k > 0 (in the notation of Nocedal and Wright), which BFGS needs to keep its Hessian approximation positive definite.

Outline of gradient descent methods: plain gradient descent, stepsize adaptation and monotonicity, steepest descent (with respect to a metric), conjugate gradient, Rprop. On choosing directions: the naive approach of cycling through basis or steepest descent directions is very inefficient in the worst case; trying new directions and keeping the good ones leads to Powell's method or conjugate gradients; using a Newton or quasi-Newton direction is generally the fastest. Once a direction is fixed, a univariate minimization along it is called a "line search".
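The gradient of that log-barrier-style objective is ∂f/∂x_j = c_j + Σ_i a_ij / (b_i − a_iᵀx). A small Python sketch with illustrative data (the values of c, A, b below are made up, not taken from the exercise, and are sized 2 variables × 3 constraints rather than 100 × 500):

```python
# Gradient of f(x) = sum_j c_j x_j - sum_i log(b_i - a_i'x):
#     df/dx_j = c_j + sum_i a_ij / (b_i - a_i'x).
import math

c = [1.0, -2.0]
A = [[1.0, 2.0], [-1.0, 0.5], [0.0, -1.0]]
b = [10.0, 8.0, 6.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f(x):
    return (sum(cj * xj for cj, xj in zip(c, x))
            - sum(math.log(bi - dot(ai, x)) for ai, bi in zip(A, b)))

def grad(x):
    s = [bi - dot(ai, x) for ai, bi in zip(A, b)]   # slacks, must stay > 0
    return [c[j] + sum(A[i][j] / s[i] for i in range(len(b)))
            for j in range(len(c))]
```

One implementation detail matters for the line search here: a trial step must also be rejected whenever any slack b_i − a_iᵀx becomes non-positive, since f is only defined on the open feasible region.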
Backtracking line search with the Armijo rule is Algorithm 3.2. Readings: Nocedal and Wright, Chapter 3 (with emphasis on Section 3.5), including the proof of global convergence of line search methods; accompanying MATLAB code: matlab/fig31fun.m. The vector d̄ = −∇f(x̄) is called the direction of steepest descent at the point x̄.

Search directions d: the steepest descent (gradient) method; the conjugate gradient method; Newton's method (which uses second-order partial derivative information); and quasi-Newton methods (which approximate the Hessian matrix and its inverse using only first-order derivatives).

Exercise: consider the unconstrained optimization problem f(x) = x² + 2x + 1. Compute the stationary point of the function from the first-order necessary conditions; then let x₀ = 4 and perform two steps of the steepest descent algorithm with (a) an exact line search.

The idea of line search is to optimize a given function with respect to a single variable: optimization algorithms for multivariable problems iteratively generate search directions in which better solutions are found, and line search is used along each of them. An exact minimum is not required — an approximation within a given tolerance suffices. Recommendation: create a text file named, for instance, numericaltour.sce (in Scilab) or numericaltour.m (in MATLAB) to record all the commands you execute.
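The quadratic exercise can be worked by hand, and checking it in code is a one-liner: f′(x) = 2x + 2 gives the stationary point x* = −1, and in one dimension the exact line search lands on x* in a single step (the second step is then trivial, since the gradient is already zero).

```python
# Worked example: f(x) = x^2 + 2x + 1, f'(x) = 2x + 2, stationary point x* = -1.
# Minimizing phi(a) = f(x0 + a*d) with d = -f'(x0) gives a = 1/2 for this
# quadratic, so the exact-line-search step from x0 = 4 reaches x* directly.
def fprime(x):
    return 2.0 * x + 2.0

x0 = 4.0
d = -fprime(x0)          # d = -10
alpha = 0.5              # exact minimizer of phi(a)
x1 = x0 + alpha * d      # -> -1.0, the stationary point
```

That one-step behaviour is special to one-dimensional (and perfectly conditioned) quadratics; in general steepest descent needs many steps even with exact line search.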
This relies on the fact that x ∈ Rⁿ solves Ax = b (with A symmetric positive definite) if and only if x is the minimal point of the second-order polynomial in n variables f(x) = ½xᵀAx − xᵀb, so gradient descent with backtracking line search — or steepest descent with exact steps — can be used as a linear solver. Under the stated assumptions, the steepest descent method is globally convergent. MATLAB implementations are given along the way. When the iterates are close enough, the function decreases via the steepest-descent direction or other aggregated directions of descent.

Why the steepest-descent direction p_k = −g_k is natural: it is cheap to compute; it is a descent direction; and it solves the problem min_p f_k + g_kᵀp over the unit ball, i.e. it is the best direction for the linear model of f. Intuitively, it would seem that p_k is the best search direction one could choose, but ignoring curvature makes it slow in practice. Note also that when an exact line search is used, scale factors in the direction have no effect on the iterates.

References: R. M. Freund, "The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method", Massachusetts Institute of Technology, February 2004; D. P. Robinson, "Line-Search Methods for Smooth Unconstrained Optimization", Department of Applied Mathematics and Statistics, Johns Hopkins University, September 15, 2020 (outline: generic linesearch framework; computing a descent direction p_k — steepest descent direction, modified Newton direction).
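A sketch of steepest descent used as a linear solver under that equivalence: the residual r = b − Ax is the negative gradient of f, and the exact step along it is α = (rᵀr)/(rᵀAr). The 2×2 system below is illustrative:

```python
# Solve A x = b (A symmetric positive definite) by steepest descent on
# f(x) = 0.5*x'Ax - b'x: residual r = b - Ax is -grad f, and the exact
# line-search step is alpha = (r'r) / (r'Ar).
def solve_sd(A, b, x0, tol=1e-10, max_iter=500):
    x = list(x0)
    for _ in range(max_iter):
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]
        rr = sum(ri * ri for ri in r)
        if rr ** 0.5 < tol:                      # small residual: done
            break
        Ar = [sum(aij * rj for aij, rj in zip(row, r)) for row in A]
        alpha = rr / sum(ri * ari for ri, ari in zip(r, Ar))
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x

x = solve_sd([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])  # -> (1/11, 7/11)
```

Replacing the residual direction with A-conjugate directions turns this same loop into the conjugate gradient method, which is why the two are presented together.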
Why use backtracking? It only needs to satisfy the sufficient-decrease (Armijo) condition rather than the full Wolfe conditions, which makes it the easiest line search to implement and to justify; it can also be seen as a refinement of the classic Armijo method. In the standard figures, backtracking line search is almost as fast as exact line search.

Exercise: complete the implementation of the backtracking line search in line_search_backtracking.m; then complete the implementations of gradient descent and Newton's method in algorithm_gd.m and algorithm_newton.m. Program the steepest descent and Newton algorithms using the backtracking line search (Algorithm 3.1), set the initial step length α₀ = 1, and print the step length used by each method at each iteration. Backtracking line-search methods are relatively easy to implement, and the behaviour of the steepest descent algorithm can be studied directly from the printed step lengths. To show that −g_k is a descent direction, consider the inner product of the gradient with the step; the same reasoning carries over when implementing gradient descent in Python.
Step 3 of the descent scheme is the update x_{t+1} = x_t + tΔx_t; step 4 checks the stopping criterion. In gradient descent, the search direction is d^k = −g_k, where g_k = ∇f(x^k); it is a descent direction because g_kᵀd^k = −‖g_k‖² < 0 whenever g_k ≠ 0. During the iterations, if an optimal step length cannot be found, a small fixed step length can be taken as a fallback. The constrained steepest descent method solves two subproblems: the search direction and the step size determination. Another line search method is the exact line search; the steepest descent method itself is a line search technique.

A typical convergence-theory outline for line-search methods: descent direction failure; backtracking Armijo line search, with global convergence of backtracking Armijo line search and of steepest descent; Wolfe–Zoutendijk global convergence (the Wolfe conditions, the Armijo–Goldstein conditions); and algorithms for the line search itself (Armijo, parabolic-cubic search, Wolfe line search). One recent paper introduces an acceleration of the gradient descent algorithm with backtracking along these lines.
