Here's a neat one from optimization: the Alternating Direction Method of Multipliers (ADMM) algorithm.
Suppose we have a separable convex objective in two blocks of variables (each block may itself be a vector), coupled only through a linear constraint:
$$\min_{x_1, x_2} \; f_1(x_1) + f_2(x_2)$$
$$\text{s.t.} \quad A_1 x_1 + A_2 x_2 = b$$
The Augmented Lagrangian function for this optimization problem would then be
$$L_\rho(x_1, x_2, \lambda) = f_1(x_1) + f_2(x_2) + \lambda^T (A_1 x_1 + A_2 x_2 - b) + \frac{\rho}{2} \|A_1 x_1 + A_2 x_2 - b\|_2^2$$
The ADMM algorithm works by performing a "Gauss-Seidel" splitting on the augmented Lagrangian: it minimizes $L_\rho(x_1, x_2, \lambda)$ first with respect to $x_1$ (with $x_2$ and $\lambda$ held fixed), then with respect to $x_2$ (with $x_1$ and $\lambda$ held fixed), and finally updates $\lambda$; the updates are written out below. This cycle repeats until a stopping criterion is met.
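Written out, iteration $k$ performs

$$x_1^{k+1} = \arg\min_{x_1} L_\rho(x_1, x_2^k, \lambda^k)$$
$$x_2^{k+1} = \arg\min_{x_2} L_\rho(x_1^{k+1}, x_2, \lambda^k)$$
$$\lambda^{k+1} = \lambda^k + \rho\,(A_1 x_1^{k+1} + A_2 x_2^{k+1} - b)$$

where the last step is a gradient-ascent step on the dual variable with step size $\rho$.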
(Note: some researchers, such as Eckstein, prefer to view ADMM through proximal operators rather than Gauss-Seidel splitting; see for example http://rutcor.rutgers.edu/pub/rrr/reports2012/32_2012.pdf )
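To make the proximal-operator connection concrete, here is a small sketch (mine, not from the link above) of two-block ADMM on the lasso problem $\min \tfrac{1}{2}\|Dx - c\|_2^2 + \gamma\|z\|_1$ s.t. $x - z = 0$: the $x_2$-update turns out to be exactly the proximal operator of the $\ell_1$ norm, i.e. elementwise soft-thresholding. The names (admm_lasso, D, c, gamma) and parameter choices are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa * ||.||_1, applied elementwise
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(D, c, gamma, rho=1.0, n_iter=200):
    """Two-block ADMM for: min 0.5*||D x - c||^2 + gamma*||z||_1  s.t.  x - z = 0.

    Maps onto the template above with f1(x) = 0.5*||D x - c||^2,
    f2(z) = gamma*||z||_1, A1 = I, A2 = -I, b = 0; u = lambda/rho is
    the scaled dual variable.
    """
    n = D.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = D.T @ D + rho * np.eye(n)      # the x-update solves a linear system
    Dtc = D.T @ c
    for _ in range(n_iter):
        x = np.linalg.solve(M, Dtc + rho * (z - u))   # minimize over x1 (= x)
        z = soft_threshold(x + u, gamma / rho)        # minimize over x2 (= z): an l1 prox
        u = u + (x - z)                               # scaled dual (lambda) update
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[0] = 3.0                # sparse ground truth
c = D @ x_true + 0.01 * rng.standard_normal(50)
print(admm_lasso(D, c, gamma=1.0).round(2))           # approximately recovers x_true
```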
For convex problems, this algorithm has been proven to converge for two blocks of variables; the same guarantee does not hold for three or more blocks. For example, consider the optimization problem
$$\min_{x_1, x_2, x_3} \; f_1(x_1) + f_2(x_2) + f_3(x_3)$$
$$\text{s.t.} \quad A_1 x_1 + A_2 x_2 + A_3 x_3 = b$$
Even if all the $f_i$ are convex, the direct ADMM-like extension (minimizing the augmented Lagrangian with respect to each block $x_i$ in turn, then updating the dual variable $\lambda$) is NOT guaranteed to converge, as was shown in this paper:
https://web.stanford.edu/~yyye/ADMM-final.pdf
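To see the failure numerically, here is a quick sketch of the direct three-block extension on what I believe is the counterexample instance from that paper: $f_1 = f_2 = f_3 = 0$, $b = 0$, and the $3 \times 3$ matrix below, for which the unique solution is $x = 0$ yet the iterates diverge. Treat the specific matrix as an assumption from memory.

```python
import numpy as np

# Columns of A are A1, A2, A3 (each block x_i is a scalar here);
# this is, as best I recall, the instance from the paper above.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 2.0]])
a = [A[:, i] for i in range(3)]
b = np.zeros(3)
rho = 1.0

x = np.ones(3)          # any generic nonzero starting point
lam = np.zeros(3)
for k in range(50):
    for i in range(3):
        # With f_i = 0, minimizing L_rho over the scalar x_i (others fixed)
        # solves: a_i^T lam + rho * a_i^T (a_i x_i + r) = 0
        r = A @ x - a[i] * x[i] - b           # constraint residual excluding block i
        x[i] = -(a[i] @ lam / rho + a[i] @ r) / (a[i] @ a[i])
    lam = lam + rho * (A @ x - b)             # dual update
    if k % 10 == 0:
        # For this instance the residual should grow, illustrating divergence
        print(k, np.linalg.norm(A @ x - b))
```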