Support Vector Machines III
Learning outcomes
- Derive the dual formulation of a constrained optimization problem
Substitution
- In the previous SVM lecture, we said we would minimize the objective subject to the constraints (the standard formulation is sketched below), substituting the inputs into a higher-dimensional feature space so that we could solve a harder problem. But this way the constraints become higher-dimensional as well, which makes the problem hard to solve.
- So we aim to find a different formulation of the SVM.
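- For reference, a sketch of the standard hard-margin primal formulation from the previous lecture (assuming the usual notation: weights w, bias b, feature map φ, labels yᵢ ∈ {−1, +1}):

```latex
\min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^2
\quad \text{subject to} \quad
y_i\bigl(w^\top \phi(x_i) + b\bigr) \ge 1, \qquad i = 1, \dots, n
```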
Duality theory
Example – unconstrained
- In unconstrained optimization, a point is stationary when the derivative is zero. Solving f′(x) = 0 gives us the stationary points.
- The gradient is zero at that point – e.g. f(x) = x² is convex, and its minimum is at x = 0.
- And the tangent line at the minimum has to be horizontal.
Example – constrained
- A constraint is a condition on the input of a function, which determines which points are feasible – i.e. which could be accepted as a solution.
- In 1D, with an equality constraint: we minimize the same function, subject to x = 2.
- The constraint reduces the feasible set to just that one point. It doesn't leave much room for optimization.
- In higher dimensions, constraints are also higher-dimensional – the feasible set is not just one point but a particular line or surface.
- E.g. in 2D, the unconstrained input is the whole real plane, but a constraint might reduce the feasible region to a circle (or some other curve) – see below.
- The thing to note is that at the minimum – the only point we allow – the gradient isn't zero anymore. In constrained optimization the gradient doesn't have to be zero, hence we need to find a different condition to identify the optimum!
The Lagrangian
- Original function: f(x) = x²
- Lagrangian function: L(x, λ) = x² + λ(x − 2)
- λ is the Lagrange multiplier; λ(x − 2) is the multiplier term.
- We introduced one more variable, λ, which makes the problem a 2D function.
- The crazy thing is that it is possible to transform the problem into a higher dimension, where the constraint goes away, by adding one variable per constraint – and the solution to the new unconstrained problem is also the solution to the original constrained one!!
- So we differentiate with respect to x and with respect to λ, and set both to zero: ∂L/∂x = 2x + λ = 0 and ∂L/∂λ = x − 2 = 0.
- The solution is x = 2, λ = −4.
- And this is the solution of the Lagrangian function, BUT ALSO the solution of the original constrained problem!!
- Q: Why do we have min and max? We are looking for a saddle point, not a minimum. A saddle point is a point that is a minimum for some variables and a maximum for the others.
- It looks like a saddle – a ∪ and a ∩, and the point where they meet is the saddle point.
- We look for a saddle point of this function: by optimizing the Lagrangian, we solve an unconstrained problem whose solution is also the solution of the original constrained one.
- → a way to solve constrained optimization problems
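- As a quick check, here is a minimal sympy sketch (variable names are my own) that recovers the saddle point of the running example:

```python
# Stationary point of L(x, lam) = x^2 + lam*(x - 2):
# setting both partial derivatives to zero recovers x = 2, lam = -4.
import sympy as sp

x, lam = sp.symbols('x lam')
L = x**2 + lam * (x - 2)

print(sp.solve([sp.diff(L, x), sp.diff(L, lam)], [x, lam]))
# {x: 2, lam: -4}
```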
Lagrangian Sliced
- This is what the Lagrangian looks like when sliced at the optimum (2, −4)!!
- When λ = −4, the slice is a parabola; when x = 2, the slice is flat.
- Our point is a minimum on the LHS (the slice in x) and a maximum on the RHS (the slice in λ).
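- A small plotting sketch of these two slices (my own code, for the running example):

```python
# Slices of L(x, lam) = x^2 + lam*(x - 2) through the saddle point (2, -4).
import numpy as np
import matplotlib.pyplot as plt

def L(x, lam):
    return x**2 + lam * (x - 2)

xs = np.linspace(-1, 5, 200)
lams = np.linspace(-8, 0, 200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(xs, L(xs, -4.0))      # lam fixed at -4: a parabola with its minimum at x = 2
ax1.set_title('lam = -4: minimum at x = 2')
ax2.plot(lams, L(2.0, lams))   # x fixed at 2: the constraint term vanishes, flat line at 4
ax2.set_title('x = 2: flat in lam')
plt.show()
```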
Necessary conditions
- We go from the constrained problem to a new problem where the function has one more variable – λ, the Lagrange multiplier.
- The gradient of the Lagrangian has two components: the derivative with respect to the original variables (x) and the derivative with respect to the Lagrange multipliers (λ).
- Setting both to zero gives a system of equations (see below).
- The two equations have different meanings – why are they important?
- Taking the derivative with respect to λ and setting it to 0, we retrieve the original constraint: we are re-enforcing the original constraint.
- If we solve the above system of equations, we find the solution to the original problem.
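- In general (a sketch, writing f for the objective and h for the equality constraint, consistent with the next sections):

```latex
\nabla L(x, \lambda)
= \begin{pmatrix} \nabla_x L \\[2pt] \nabla_\lambda L \end{pmatrix}
= \begin{pmatrix} \nabla f(x) + \lambda\,\nabla h(x) \\[2pt] h(x) \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
```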
Lagrange Multipliers
- Lagrange multipliers – the standard approach to solving optimization problems with equality constraints.
- Support vectors are those vectors in the active set of the constraints.
- And for support vectors, the constraints hold as equalities.
- Red arrows – gradients at different points.
- In more dimensions: imagine the function we are minimizing has these circles as its contours.
- The minimum is at (0,0), and the function keeps increasing as it goes outward.
- The circles are contour lines: the function has the same value everywhere along each line.
- In a constrained problem, we impose that the solution lies on some line with the equation h(x) = 0: I want the minimum of the function, but the point has to lie on the line.
- Q: What's special about the point that is the minimum in a constrained problem?
- At any point, the gradient of the constraint is always orthogonal to the constraint line itself (just as the gradient of the function is orthogonal to its contour lines).
- Red arrows – gradients at different points of our constraint, which is a straight line.
- If I look at the (anti-)gradient of the function, it points towards the minimum. We can move along the constraint (the red line) and keep improving the objective function.
- At such points, the gradient of the constraint and the gradient of the function are not parallel!! The red arrow points outwards while the black one points towards the min.
- So we can still improve the function by moving along the constraint.
- When we reach the middle red point, the gradient of the function and the gradient of the constraint ARE parallel.
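- A tiny numeric illustration of this parallel-gradient condition, on a toy problem of my own choosing (minimize f(x, y) = x² + y² on the line x + y = 2, whose constrained minimum is (1, 1)):

```python
import numpy as np

def grad_f(p):
    # Gradient of the objective f(x, y) = x^2 + y^2
    return np.array([2 * p[0], 2 * p[1]])

def grad_h(p):
    # Gradient of the constraint h(x, y) = x + y - 2 (normal to the line)
    return np.array([1.0, 1.0])

p = np.array([1.0, 1.0])                  # the constrained minimum
lam = -grad_f(p)[0] / grad_h(p)[0]        # constant relating the two vectors
print(lam)                                # -2.0
print(np.allclose(-grad_f(p), lam * grad_h(p)))  # True: -grad f = lam * grad h
```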
Lagrange Multipliers II
- If I slide along the red line and reach the red point – this is the minimum, and here the gradient of the constraint and the gradient of the function are parallel!!!!! Up to that point we keep improving the function.
- If they are parallel, we cannot improve the function anymore by staying on the constraint – we would have to jump off it.
- When we move along the constraint, we are not moving exactly in the direction of the gradient, but we are still improving the value of the function – until we reach the minimum!!
- The gradient is the direction of fastest increase – we have to follow the anti-gradient, because we want to minimize. At the minimum, the anti-gradient must be orthogonal to the constraint line.
- So how do we know if the gradient of the constraint and the gradient of the function are parallel?
- −∇f(x) = λ∇h(x) – this holds exactly when the gradients are parallel.
- There is a constant, λ, such that if I multiply one vector by that constant, I get the other.
- The anti-gradient of the function and the gradient of the constraint differ only by a constant – λ!!!!
- Q: Does the direction matter? – important point.
- If the arrow points the other way, does it make a difference? No, it doesn't, because we have to stay on the line anyway. For an equality constraint, multiplying it by −1 gives the same equation, hence the direction of the gradient is irrelevant.
- (A neuron, by contrast, is an inequality: if we multiply by −1 we change where the neuron is positive.)
Lagrange Multipliers III
- The first of the two equations, where the derivative with respect to x is set to 0, is nothing but the equation we just derived: the derivative of the Lagrangian with respect to x is the derivative of f(x) plus λ times the derivative of the constraint, ∇x L(x, λ) = ∇f(x) + λ∇h(x).
- Since differentiation is a linear operator, we can move the −∇f(x) of the parallelism condition to the other side and set everything equal to 0: ∇f(x) + λ∇h(x) = 0.
- So if we differentiate the Lagrangian with respect to x and set it to zero, we obtain exactly the equation imposing parallelism – our new condition.
- New necessary conditions for a point to be a constrained minimum:
- The gradients of the function and the constraint have to be parallel, AND the point has to be on the constraint!!
- ∇x L = 0 ensures that the gradients are parallel; ∇λ L = 0 ensures the point is on the constraint.
- So for a constrained minimum, the derivatives of the Lagrangian all have to be zero.
Lagrange Multipliers IV
- Lagrange function: a function of the original variables plus the Lagrange multipliers, with one multiplier added per constraint.
- L(x, λ) = f(x) + Σi λi hi(x) → the formulation of the Lagrangian function.
- E.g. L(x, λ) = x² + λ(x − 2), where the original function is x² and the multiplier term is λ(x − 2).
- The stationary points of the Lagrangian are the optimal solutions to the original problem!!
- When we set the gradient to zero, we are imposing 2 conditions:
- The gradients have to be parallel – ensured by ∇x L = 0.
- The point satisfies the constraints – ensured by ∇λ L = 0.
- The beauty of this is that these characteristics are already encoded in the gradient of the Lagrangian!!
- Now what do we do with this function? – We derive from it a new function of the Lagrange multipliers alone!!
Q: How many λ variables did we add?
- As we said in the previous lecture, as many as there are points in the dataset!!
- We will have as many λs as constraints – i.e. one per data point.
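- For the SVM itself this gives (a sketch in the standard notation, with one multiplier αᵢ per margin constraint, i.e. per data point):

```latex
L(w, b, \alpha) = \tfrac{1}{2}\lVert w \rVert^2
- \sum_{i=1}^{n} \alpha_i \Bigl[\, y_i\bigl(w^\top \phi(x_i) + b\bigr) - 1 \,\Bigr],
\qquad \alpha_i \ge 0
```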
The Dual Problem
- So: original problem → higher-dimensional Lagrangian → now we reduce the number of dimensions again by using only one of the two derivatives.
- With the new Lagrangian function we derived, we can formulate a new function solely of the Lagrange multipliers. The maximum of this function = the minimum of the original constrained problem!!
- So how do we formulate this new function?
- The higher-dimensional Lagrangian is reduced again, by using only one of the two equations – we obtain a function of λ alone: q(λ).
- We first differentiate the Lagrangian with respect to x, solve for x, then substitute that x back in to get a function made up of only λ – q(λ).
- Then we differentiate q(λ) and solve q′(λ) = 0.
- Steps (see the sympy sketch after this section):
- Start with a constrained optimization problem and compile the constraints into the Lagrangian, each constraint multiplied by its Lagrange multiplier. This gives L(x, λ).
- Solve for x and substitute back into the Lagrangian to get an equation of λ alone – q(λ).
- Differentiate q(λ) and solve q′(λ) = 0.
- This λ is the maximizer of the dual; the corresponding x is the minimizer of the original problem – the solutions coincide!!!
- The new function is called the dual formulation of the original problem, which is called the primal problem.
- And again, the solution to the dual = the solution to the primal.
- We take every constraint and fold it into the Lagrangian – we go from 1 dimension to 2, then we flatten it back down and land on a totally different plane.
- Why do we do this?
- The dual problem has one variable per constraint. In the original we may have more constraints than variables; in the dual we then have more variables. So we don't do this to end up in a lower dimension – we do it because the dual problem has properties that we like.
- By going through the Lagrangian, we end up with a problem such that the solutions of the dual and the primal are the same!!
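- Here is a minimal sympy sketch of these steps on the running example (minimize x² subject to x = 2):

```python
import sympy as sp

x, lam = sp.symbols('x lam')
L = x**2 + lam * (x - 2)                      # Lagrangian of the primal

# Step 1: solve dL/dx = 0 for x (x as a function of lam)
x_star = sp.solve(sp.diff(L, x), x)[0]        # -> -lam/2

# Step 2: substitute back to get the dual function q(lam)
q = sp.expand(L.subs(x, x_star))              # -> -lam**2/4 - 2*lam

# Step 3: maximize q by solving dq/dlam = 0
lam_star = sp.solve(sp.diff(q, lam), lam)[0]  # -> -4

print(x_star.subs(lam, lam_star))             # 2: the primal solution
print(q.subs(lam, lam_star))                  # 4: dual optimum = f(2) = 4
```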
Duality
- The two functions are functions of different variables – they live in different spaces – but they have this property: f(x) is an upper bound of q(λ).
- Not only is it an upper bound, but the two actually touch!! This is called strong duality!!
- The minimum of one coincides with the maximum of the other, so we can solve either one.
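- In symbols (a sketch; here x ranges over the feasible points of the primal):

```latex
q(\lambda) \;\le\; f(x) \quad \text{(weak duality)},
\qquad
\max_{\lambda} q(\lambda) \;=\; \min_{x} f(x) \quad \text{(strong duality)}
```

- In the running example both sides coincide at the value 4: q(−4) = 4 = f(2).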
Inequality Constraints
- So far it was simple because it was an equality constraint. In reality it will be more complex – we have inequality constraints.
- LHS: we want the solution to be in some region of the domain, so the constraint is x ≥ −3. The unconstrained minimum is also the constrained minimum – here our minimum already lies inside the feasible region, so the constraint has no effect.
- RHS – we add one more constraint: −3 ≤ x ≤ −2. So only one small interval fulfils both constraints.
- So between −3 and −2, where is our minimum?
- The first constraint doesn't do anything and we can get rid of it – it is inactive.
- However, the other one forces the minimum to be somewhere else – that constraint is active!!!!!!!!!!
- And when a constraint is active, the solution will be on its edge, i.e. where it holds with equality. So x = −2 in this case.
- This should ring a bell – support vectors!!! They are the points that satisfy the SVM constraints at equality – the ones closest to the line, that make the boundary what it is.
- Just like how we could remove points that are not support vectors, we can remove constraints that are inactive.
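- A minimal sympy sketch of this active/inactive logic for the RHS example (minimize x² subject to −3 ≤ x ≤ −2, written in the canonical form g₁(x) = −3 − x ≤ 0 and g₂(x) = x + 2 ≤ 0):

```python
import sympy as sp

x, l1, l2 = sp.symbols('x l1 l2')
f = x**2
g1, g2 = -3 - x, x + 2                     # canonical form: g_i(x) <= 0
L = f + l1 * g1 + l2 * g2                  # one multiplier per constraint

# Stationarity plus complementary slackness (l_i * g_i = 0)
sols = sp.solve([sp.diff(L, x), l1 * g1, l2 * g2], [x, l1, l2], dict=True)

# Keep the solutions that are feasible and have non-negative multipliers
for s in sols:
    if s[l1] >= 0 and s[l2] >= 0 and g1.subs(x, s[x]) <= 0 and g2.subs(x, s[x]) <= 0:
        print(s)   # {x: -2, l1: 0, l2: 4}: g1 inactive (l1 = 0), g2 active at x = -2
```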
Complementary Slackness
- Whenever we have constraints of the form g(x) ≤ 0 – the canonical form – we are minimizing the function subject to inequality constraints.
- λi gi(x) = 0 → complementary slackness
- Some of the constraints will be inactive – they don't matter, and we get rid of them.
- For active constraints, we need to enforce that the constraint is satisfied at equality.
- We can do that by requiring λi gi(x) = 0: the product of the Lagrange multiplier and the constraint has to be zero. For this to happen:
- Either the constraint is zero, i.e. satisfied at equality, and λ can be whatever,
- Or, if the constraint is not satisfied at equality, then the λ has to be zero, and the constraint disappears from the Lagrangian.
- We have enforced that either the constraint is satisfied at equality OR the Lagrange multiplier goes away!!
KKT Multipliers
- −∇f(x) = λ∇g(x) with λ ≥ 0 – the gradient of the constraint and the anti-gradient of the function point in the same direction.
- Before, we said the point has to be on the line. But now the point has to be on a half-plane: the solution could be anywhere in that area. And which side we want matters, because if we wanted the area below the line, the constraint would be inactive.
- If we want the area above the line, the constraint is active, and we need the point where the gradients are parallel – and not only parallel, they also have to point in the same direction!!!
- The gradient of the constraint and the anti-gradient of the function have to point in the same direction.
- If they didn't, you could move the point towards the minimum while staying inside the feasible area, and the constraint would be inactive.
- How do we impose this?
- The gradients are parallel as before, but now we additionally require the λs to be non-negative (λ ≥ 0). That way we are sure the two vectors point in the same direction.
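- Putting it all together, these are the standard KKT conditions for minimizing f(x) subject to gi(x) ≤ 0:

```latex
\begin{aligned}
&\nabla f(x) + \textstyle\sum_i \lambda_i \nabla g_i(x) = 0 && \text{(stationarity: parallel gradients)} \\
&g_i(x) \le 0 && \text{(primal feasibility)} \\
&\lambda_i \ge 0 && \text{(dual feasibility: same direction)} \\
&\lambda_i\, g_i(x) = 0 && \text{(complementary slackness)}
\end{aligned}
```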