
Published as a conference paper at ICLR 2024

Adaptive Federated Learning with Auto-Tuned Clients

Junhyung Lyle Kim⋆∗ Mohammad Taha Toghani† César A. Uribe† & Anastasios Kyrillidis⋆
⋆Department of Computer Science, †Department of Electrical and Computer Engineering
Rice University, Houston, TX 77005, USA
{jlylekim, mttoghani, cauribe, anastasios}@rice.edu
arXiv:2306.11201v3 [cs.LG] 2 May 2024

Abstract

Federated learning (FL) is a distributed machine learning framework where the


global model of a central server is trained via multiple collaborative steps by
participating clients without sharing their data. While FL is a flexible framework, where the
distribution of local data, the participation rate, and the computing power of each client can
greatly vary, such flexibility gives rise to many new challenges, especially in hyperparameter
tuning on the client side. We propose ∆-SGD,
a simple step size rule for SGD that enables each client to use its own step size
by adapting to the local smoothness of the function each client is optimizing. We
provide theoretical and empirical results where the benefit of the client adaptivity
is shown in various FL scenarios.

1 Introduction

Federated Learning (FL) is a machine learning framework that enables multiple clients to collabora-
tively learn a global model in a distributed manner. Each client trains the model on their local data,
and then sends only the updated model parameters to a central server for aggregation. Mathematically,
FL aims to solve the following optimization problem:
$$\min_{x \in \mathbb{R}^d} f(x) := \frac{1}{m} \sum_{i=1}^{m} f_i(x), \qquad (1)$$

where fi (x) := Ez∼Di [Fi (x, z)] is the loss function of the i-th client, and m is the number of clients.
A key property of FL is its flexibility in how various clients participate in the overall training procedure.
The number of clients, their participation rates, and computing power available to each client can
vary and change at any time during the training. Additionally, the local data of each client is not
shared with others, resulting in better data privacy (McMahan et al., 2017; Agarwal et al., 2018).
While advantageous, such flexibility also introduces a plethora of new challenges, notably: i) how the
server aggregates the local information coming from each client, and ii) how to make sure each client
meaningfully “learns” using their local data and computing device. The first challenge was partially
addressed in Reddi et al. (2021), where adaptive optimization methods such as Adam (Kingma & Ba,
2014) were utilized in the aggregation step. Yet, the second challenge remains largely unaddressed.
Local data of each client is not shared, which intrinsically introduces heterogeneity in terms of the
size and the distribution of local datasets. That is, Di differs for each client i, as well as the number
of samples z ∼ Di . Consequently, fi (x) can vastly differ from client to client, making the problem
in (1) hard to optimize. Moreover, the number of local updates is far larger than the number of
aggregation steps, due to the high communication cost in distributed settings, which is typically
three to four orders of magnitude more expensive than local computation (Lan et al., 2020).
As a result, extensive fine-tuning of the client optimizers is often required to achieve good performance
in FL scenarios. For instance, experimental results of the well-known FedAvg algorithm were
obtained after performing a grid-search of typically 11-13 step sizes of the clients’ SGD (McMahan

This paper extends Kim et al. (2023) presented at the ICML 2023 Federated Learning Workshop.


Figure 1: Illustration of the effect of not properly tuning the client step sizes. In (A), each client
optimizer uses the best step size from grid-search. Then, the same step size from (A) is intentionally
used in settings (B) and (C). Only ∆-SGD works well across all settings without additional tuning.

et al., 2017, Section 3), as SGD (and its variants) are highly sensitive to the step size (Toulis & Airoldi,
2017; Assran & Rabbat, 2020; Kim et al., 2022b). Similarly, in Reddi et al. (2021), 6 different client
step sizes were grid-searched for different tasks, and not surprisingly, each task requires a different
client step size to obtain the best result, regardless of the server-side adaptivity (Reddi et al., 2021,
Table 8). Importantly, even these “fine-tunings” are done under the setting that all clients use the
same step size, which is sub-optimal, given that fi can be vastly different per client; we analyze this
further in Section 2.
Initial examination as motivation. The implication of not properly tuning the client optimizer is
highlighted in Figure 1. We plot the progress of test accuracies for different client optimizers, where,
for all the other test cases, we intentionally use the same step size rules that were fine-tuned for the
task in Figure 1(A). There, we train a ResNet-18 for CIFAR-10 dataset classification within an FL
setting, where the best step sizes for each (client) optimizer were used after grid-search; we defer the
experimental details to Section 4. Hence, all methods perform reasonably well, although ∆-SGD,
our proposed method, achieves noticeably better test accuracy than the non-adaptive SGD variants
(e.g., a 5% difference in final classification accuracy from SGDM), and final accuracy comparable
only to the adaptive SGD variants, like Adam and Adagrad.
In Figure 1(B), we train a shallow CNN for MNIST classification, using the same step sizes from (A).
MNIST classification is now considered an “easy” task, and therefore SGD with the same constant
and decaying step sizes from Figure 1(A) works well. However, with momentum, SGDM exhibits
highly oscillating behavior, which results in slow progress and poor final accuracy, especially without
decaying the step size. Adaptive optimizers, like Adam and Adagrad, show similar behavior, falling
short in achieving good final accuracy, compared to their performance in the case of Figure 1(A).
Similarly, in Figure 1(C), we plot the test accuracy progress for CIFAR-100 classification trained
on a ResNet-50, again using the same step size rules as before. Contrary to Figure 1(B), SGD with
momentum (SGDM) works better than SGD, both with the constant and the decaying step sizes.
Adam becomes a “good optimizer” again, but its “sibling,” Adagrad, performs worse than SGDM.
On the contrary, our proposed method, ∆-SGD, which we introduce in Section 3, achieves superior
performance in all cases without any additional tuning.
The above empirical observations beg answers to important and non-trivial questions in training FL
tasks using variants of SGD methods as the client optimizer: Should the momentum be used? Should
the step size be decayed? If so, when? Unfortunately, Figure 1 indicates that the answers to these
questions highly vary depending on the setting; once the dataset itself or how the dataset is distributed
among different clients changes, or once the model architecture changes, the client optimizers have to
be properly re-tuned to ensure good performance. Perhaps surprisingly, the same holds for adaptive
methods like Adagrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014).
Our hypothesis and contributions. Our paper takes a stab in this direction: we propose DELTA-
SGD (DistributEd LocaliTy Adaptive SGD), a simple adaptive distributed SGD scheme, that can
automatically tune its step size based on the available local data. We will refer to our algorithm as
∆-SGD in the rest of the text. Our contributions can be summarized as follows:


• We propose ∆-SGD, which has two implications in FL settings: i) each client can use its own step
size, and ii) each client’s step size adapts to the local smoothness of fi –hence LocaliTy Adaptive–
and can even increase during local iterations. Moreover, due to the simplicity of the proposed step
size, ∆-SGD is agnostic to the loss function and the server optimizer, and thus can be combined
with methods that use different loss functions such as FedProx (Li et al., 2020) or MOON (Li et al.,
2021), or adaptive server methods such as FedAdam (Reddi et al., 2021).
• We provide convergence analysis of ∆-SGD in a general nonconvex setting (Theorem 1). We
also prove convergence in the convex setting; due to space constraints, we only state the result of the
general nonconvex case in Theorem 1, and defer other theorems and proofs to Appendix A.
• We evaluate our approach on several benchmark datasets and demonstrate that ∆-SGD achieves
superior performance compared to other state-of-the-art FL methods. Our experiments show
that ∆-SGD can effectively adapt the client step size to the underlying local data distribution,
and achieve convergence of the global model without any additional tuning. Our approach can
help overcome the client step size tuning challenge in FL and enable more efficient and effective
collaborative learning in distributed systems.

2 Preliminaries and Related Work


A bit of background on optimization theory. Arguably the most fundamental optimization algorithm
for minimizing a function f(x): ℝ^d → ℝ is gradient descent (GD), which iterates with the step
size η_t as: x_{t+1} = x_t − η_t ∇f(x_t). Under the assumption that f is globally L-smooth, that is:
$$\|\nabla f(x) - \nabla f(y)\| \leq L \cdot \|x - y\| \quad \forall x, y \in \mathbb{R}^d, \qquad (2)$$
the step size η_t = 1/L is the “optimal” step size for GD, which converges for convex f at the rate:
$$f(x_{t+1}) - f(x^\star) \leq \frac{L\|x_0 - x^\star\|^2}{2(2t + 1)}. \qquad (3)$$
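To make the role of the step size 1/L concrete, here is a minimal sketch (our own illustration, not part of the original text) of GD with the constant step size η_t = 1/L on a simple quadratic, where L is the largest eigenvalue of A:

```python
import numpy as np

def gradient_descent(grad_f, x0, L, num_steps=100):
    """Plain GD with constant step size 1/L for an L-smooth objective (sketch)."""
    x = np.array(x0, dtype=float)
    for _ in range(num_steps):
        x = x - (1.0 / L) * grad_f(x)  # x_{t+1} = x_t - (1/L) * grad f(x_t)
    return x

# f(x) = 0.5 * x^T A x is L-smooth with L = lambda_max(A); its minimizer is x* = 0
A = np.diag([1.0, 10.0])
x_final = gradient_descent(lambda x: A @ x, x0=[1.0, 1.0], L=10.0)
```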

Limitations of popular step sizes. In practice, however, the constants like L are rarely known (Boyd
et al., 2004). As such, there has been a plethora of efforts in the optimization community to develop
a universal step size rule or method that does not require a priori knowledge of problem constants
so that an optimization algorithm can work as “plug-and-play” (Nesterov, 2015; Orabona & Pál,
2016; Levy et al., 2018). Major lines of work in this direction include: i) line-search (Armijo,
1966; Paquette & Scheinberg, 2020), ii) adaptive learning rate (e.g., Adagrad (Duchi et al., 2011)
and Adam (Kingma & Ba, 2014)), and iii) Polyak step size (Polyak, 1969), to name a few.
Unfortunately, the pursuit of finding the “ideal” step size is still active (Loizou et al., 2021; Defazio
& Mishchenko, 2023; Bernstein et al., 2023), as the aforementioned approaches have limitations.
For instance, to use line search, solving a sub-routine is required, inevitably incurring additional
evaluations of the function and/or the gradient. To use methods like Adagrad and Adam, the
knowledge of D, the distance from the initial point to the solution set, is required to ensure good
performance (Defazio & Mishchenko, 2023). Similarly, the Polyak step size requires knowledge of f(x⋆)
(Hazan & Kakade, 2019), which is not always known a priori, and it is unclear what value to use for
an arbitrary loss function and model; see also Section 4, where improperly estimating D or f(x⋆)
results in suboptimal performance of the relevant client optimizers in FL scenarios.
Implications in distributed/FL scenarios. The global L-smoothness in (2) needs to be satisfied for
all x and y, implying that even a finite L can be arbitrarily large. This results in a small step size, leading
to slow convergence, as seen in (3). Malitsky & Mishchenko (2020) attempted to resolve this issue
by proposing a step size for (centralized) GD that depends on the local smoothness of f, which by
definition is smaller than the global smoothness constant L in (2). ∆-SGD is inspired by this work.
Naturally, the challenge is aggravated in the distributed case. To solve (1) under the assumption that
each fi is Li-smooth, a step size of the form 1/Lmax, where Lmax := maxi Li, is often used (Yuan
et al., 2016; Scaman et al., 2017; Qu & Li, 2019) to ensure convergence (Uribe et al., 2020). Yet,
one can easily imagine a situation where Li ≫ Lj for i ≠ j, in which case the convergence of fj(x)
can be arbitrarily slow when using a step size of the form 1/Lmax.
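As a concrete illustration (a worked example we add here, not taken from the original text): with two clients whose smoothness constants are
$$L_1 = 100, \qquad L_2 = 1, \qquad \frac{1}{L_{\max}} = \frac{1}{100},$$
client 2, whose own loss would tolerate steps as large as 1/L_2 = 1, is forced to take steps 100 times smaller than necessary; the corresponding bound in (3) is then inflated by a factor of L_max/L_2 = 100, so reaching the same accuracy on f_2 takes roughly 100 times more iterations.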
Therefore, to ensure that each client learns useful information in FL settings as in (1), ideally: i) each
agent should be able to use its own step size, instead of crude ones like 1/Lmax for all agents; ii) the
individual step size should be “locally adaptive” to the function fi (i.e., even 1/Li can be too crude,
analogously to the centralized GD); and iii) the step size should not depend on problem constants
like Li or µi , which are often unknown. In Section 3, we introduce DistributEd LocaliTy Adaptive
SGD (∆-SGD), which satisfies all of the aforementioned desiderata.
Related work on FL. The FL literature is vast; here, we focus on the results that closely relate to our
work. The FedAvg algorithm (McMahan et al., 2017) is one of the simplest FL algorithms, which
averages client parameters after some local iterations. Reddi et al. (2021) showed that FedAvg is
a special case of a meta-algorithm, where both the clients and the server use SGD optimizer, with
the server learning rate being 1. To handle data heterogeneity and model drift, they proposed using
adaptive optimizers at the server. This results in algorithms such as FedAdam and FedYogi, which
respectively use the Adam (Kingma & Ba, 2014) and Yogi (Zaheer et al., 2018) as server optimizers.
Our approach is orthogonal and complementary to this, as we propose a client adaptive optimizer that
can be easily combined with these server-side aggregation methods (c.f., Appendix B.4).
Other approaches have handled the heterogeneity by changing the loss function. FedProx (Li et al.,
2020) adds an ℓ2-norm proximal term to the local objective to handle data heterogeneity. Similarly, MOON (Li et al.,
2021) uses the model-contrastive loss between the current and previous models. Again, our proposed
method can be seamlessly combined with these approaches (c.f., Table 2b, Appendix B.6 and B.5).
The closest related work to ours is a concurrent work by Mukherjee et al. (2023). There, the authors
utilize the Stochastic Polyak Stepsize (Loizou et al., 2021) in FL settings. We do include SPS results
in Section 4. There are some previous works that considered client adaptivity, including AdaAlter
(Xie et al., 2019a) and Wang et al. (2021). Both works utilize an adaptive client optimizer similar to
Adagrad. However, AdaAlter incurs twice the memory compared to FedAvg, as AdaAlter requires
communicating both the model parameters as well as the “client accumulators”; similarly, Wang et al.
(2021) requires complicated local and global correction steps to achieve good performance. In short,
previous works that utilize client adaptivity require modification in server-side aggregation; without
such heuristics, Adagrad (Duchi et al., 2011) exhibits suboptimal performance, as seen in Table 1.
Lastly, another line of work attempts to handle heterogeneity by utilizing control variates (Pra-
neeth Karimireddy et al., 2019; Liang et al., 2019; Karimireddy et al., 2020), a similar idea to variance
reduction from single-objective optimization (Johnson & Zhang, 2013; Gower et al., 2020). While
theoretically appealing, these methods either require periodic full gradient computation or are not
usable under partial client participation, as all the clients must maintain a state throughout all rounds.

3 DELTA(∆)-SGD: Distributed Locality Adaptive SGD

We now introduce ∆-SGD. In its simplest form, client i at communication round t uses the step size:

$$\eta_t^i = \min\left\{ \frac{\|x_t^i - x_{t-1}^i\|}{2\|\nabla f_i(x_t^i) - \nabla f_i(x_{t-1}^i)\|},\; \sqrt{1 + \theta_{t-1}^i}\,\eta_{t-1}^i \right\}, \qquad \theta_{t-1}^i = \eta_{t-1}^i / \eta_{t-2}^i. \qquad (4)$$
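As a quick sanity check (a worked example we add for illustration, not from the original text), consider a quadratic local loss f_i(x) = (a_i/2)∥x∥², so that ∇f_i(x) = a_i x. The first term in (4) then evaluates to
$$\frac{\|x_t^i - x_{t-1}^i\|}{2\|a_i x_t^i - a_i x_{t-1}^i\|} = \frac{1}{2 a_i},$$
i.e., half the inverse of the smoothness constant of f_i, recovering a client-specific step size of the order 1/L_i without knowing a_i in advance.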

The first part of min{·, ·} approximates the (inverse of) local smoothness of fi ,1 and the second part
controls how fast ηti can increase. Indeed, ∆-SGD with (4) enjoys the following decrease in the
Lyapunov function:
$$\|x_{t+1} - x^\star\|^2 + \frac{1}{2m}\sum_{i=1}^{m} \|x_{t+1} - x_t^i\|^2 + \frac{2}{m}\sum_{i=1}^{m} \eta_{t+1}^i \theta_{t+1}^i \big[ f_i(x_t^i) - f_i(x^\star) \big]$$
$$\leq \|x_t - x^\star\|^2 + \frac{1}{2m}\sum_{i=1}^{m} \|x_t - x_{t-1}^i\|^2 + \frac{2}{m}\sum_{i=1}^{m} \eta_t^i \theta_t^i \big[ f_i(x_{t-1}^i) - f_i(x^\star) \big], \qquad (5)$$

when fi’s are assumed to be convex (c.f., Theorem 5 in Appendix A). For the FL settings, we extend
(4) by including stochasticity and local iterations, as summarized in Algorithm 1 and visualized in
Figure 7. For brevity, we use ∇̃f_i(x) = (1/|B|) Σ_{z∈B} ∇F_i(x, z) to denote the stochastic gradients with
batch size |B| = b.

¹Notice that ∥∇f_i(x_t^i) − ∇f_i(x_{t−1}^i)∥ ⩽ (1/(2η_t^i)) ∥x_t^i − x_{t−1}^i∥ ≈ L̃_{i,t} ∥x_t^i − x_{t−1}^i∥.


Algorithm 1 DELTA(∆)-SGD: DistributEd LocaliTy Adaptive SGD

 1: input: x_0 ∈ ℝ^d, η_0, θ_0, γ > 0, and p ∈ (0, 1).
 2: for each round t = 0, 1, . . . , T−1 do
 3:   sample a subset S_t of clients with size |S_t| = p · m
 4:   for each machine in parallel for i ∈ S_t do
 5:     set x^i_{t,0} = x_t
 6:     set η^i_{t,0} = η_0 and θ^i_{t,0} = θ_0
 7:     for local step k ∈ [K] do
 8:       x^i_{t,k} = x^i_{t,k−1} − η^i_{t,k−1} ∇̃f_i(x^i_{t,k−1})          ▷ update local parameter with ∆-SGD
 9:       η^i_{t,k} = min{ γ ∥x^i_{t,k} − x^i_{t,k−1}∥ / (2 ∥∇̃f_i(x^i_{t,k}) − ∇̃f_i(x^i_{t,k−1})∥),  √(1 + θ^i_{t,k−1}) η^i_{t,k−1} }
10:       θ^i_{t,k} = η^i_{t,k} / η^i_{t,k−1}                               ▷ (lines 9 & 10) update locality adaptive step size
11:     end for
12:   end for
13:   x_{t+1} = (1/|S_t|) Σ_{i∈S_t} x^i_{t,K}                               ▷ server-side aggregation
14: end for
15: return x_T

We make a few remarks on Algorithm 1. First, the input θ_0 > 0 can be quite arbitrary, as it can be
corrected, per client, in the first local iteration (line 10); similarly for η_0 > 0, although η_0 should
be sufficiently small to prevent divergence in the first local step. Second, we include the “amplifier”
γ in the first condition of the step size (line 9), but this is only needed for Theorem 1.² Last, ∇̃f_i(x^i_{t,k−1})
shows up twice: in updating x^i_{t,k} (line 8) and η^i_{t,k} (line 9). Thus, one can use the same or different
batches; we use the same batches in experiments to prevent additional gradient evaluations.
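To make lines 7–11 of Algorithm 1 concrete, below is a minimal NumPy sketch of one client's local loop. This is our own illustration, not the official implementation; `grad_fn` stands in for the mini-batch gradient ∇̃f_i, and the small constant added to the denominator is our addition to avoid division by zero.

```python
import numpy as np

def delta_sgd_local_steps(x_init, grad_fn, num_local_steps,
                          eta0=0.2, theta0=1.0, gamma=2.0):
    """One client's K local steps with the Delta-SGD step size (sketch).

    grad_fn(x) returns a (stochastic) gradient of the client's local loss at x;
    the same gradient is reused for the parameter update and the step size,
    mirroring the same-batch choice described in the text.
    """
    x_prev = np.array(x_init, dtype=float)
    g_prev = grad_fn(x_prev)
    eta_prev, theta_prev = eta0, theta0
    for _ in range(num_local_steps):
        # line 8: local update with the current step size
        x_new = x_prev - eta_prev * g_prev
        g_new = grad_fn(x_new)
        # line 9: min of the local-smoothness estimate and the growth cap
        smooth_est = gamma * np.linalg.norm(x_new - x_prev) / (
            2.0 * np.linalg.norm(g_new - g_prev) + 1e-12)
        eta_new = min(smooth_est, np.sqrt(1.0 + theta_prev) * eta_prev)
        # line 10: ratio of consecutive step sizes
        theta_prev = eta_new / eta_prev
        x_prev, g_prev, eta_prev = x_new, g_new, eta_new
    return x_prev

# toy usage on a quadratic local loss f_i(x) = 0.5 * x^T A x
A = np.diag([1.0, 10.0])
x_local = delta_sgd_local_steps([1.0, 1.0], lambda v: A @ v, num_local_steps=5)
```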

3.1 Convergence Analysis

Technical challenges. The main difference in analyzing Algorithm 1 compared to other decentralized
optimization algorithms is that the step size η^i_{t,k} depends not only on the round t and local step k, but
also on i, the client. To deal with the client-dependent step size, we require a slightly non-standard
assumption on the dissimilarity between f and f_i, as detailed below.
Assumption 1. There exist nonnegative constants σ, ρ, and G such that for all i ∈ [m] and x ∈ ℝ^d,
$$\mathbb{E}\|\nabla F_i(x, z) - \nabla f_i(x)\|^2 \leq \sigma^2, \qquad \text{(bounded variance)} \qquad (1a)$$
$$\|\nabla f_i(x)\| \leq G, \qquad \text{(bounded gradient)} \qquad (1b)$$
$$\|\nabla f_i(x) - \nabla f(x)\|^2 \leq \rho\,\|\nabla f(x)\|^2. \qquad \text{(strong growth of dissimilarity)} \qquad (1c)$$

Assumption 1a has been standard in stochastic optimization literature (Ghadimi & Lan, 2013; Stich,
2018; Khaled et al., 2020); recently, this assumption has been relaxed to weaker ones such as expected
smoothness (Gower et al., 2021), but this is beyond the scope of this work. Assumption 1b is fairly standard
in nonconvex optimization (Zaheer et al., 2018; Ward et al., 2020), and often used in FL setting (Xie
et al., 2019a;b; Reddi et al., 2021). Assumption 1c is reminiscent of the strong growth assumption in
stochastic optimization (Schmidt & Roux, 2013), which is still used in recent works (Cevher & Vũ,
2019; Vaswani et al., 2019a;b). To the best of our knowledge, this is the first theoretical analysis of
the FL setting where clients can use their own step size. We now present the main theorem.
Theorem 1. Let Assumption 1 hold, with ρ = O(1). Further, suppose that γ = O(1/(K√T)), and
η_0 = O(γ). Then, the following property holds for Algorithm 1, for T sufficiently large:
$$\frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\big\|\nabla f(x_t)\big\|^2 \leq O\left(\frac{\Psi_1}{\sqrt{T}}\right) + O\left(\frac{\tilde{L}^2 \Psi_2}{T}\right) + O\left(\frac{\tilde{L}^3 \Psi_2}{\sqrt{T^3}}\right),$$

²For all our experiments, we use the default value γ = 2 from the original implementation in https://github.com/ymalitsky/adaptive_GD/blob/master/pytorch/optimizer.py.


where Ψ_1 = max{σ²/b, f(x_0) − f(x⋆)} and Ψ_2 = σ²/b + G² are global constants, with b = |B|
being the batch size; L̃ is a constant at most the maximum of the local smoothness constants, i.e., max_{i,t} L̃_{i,t},
where L̃_{i,t} is the local smoothness of f_i at round t.

The convergence result in Theorem 1 implies a sublinear convergence to an ε-first order stationary
point with at least T = O(ε−2 ) communication rounds. We remind again that the conditions on γ
and η0 are only required for our theory.3 Importantly, we did not assume fi is globally L-smooth, a
standard assumption in nonconvex optimization and FL literature (Ward et al., 2020; Koloskova et al.,
2020; Li et al., 2020; Reddi et al., 2021); instead, we can obtain a smaller quantity, L̃, through our
analysis; due to space limitations, we defer the details and the proof to Appendix A.
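For concreteness, the stated round complexity follows from the dominant term in Theorem 1 (a one-line check we add here, reading ε as a bound on the average squared gradient norm):
$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_t)\|^2 \leq O\!\left(\frac{\Psi_1}{\sqrt{T}}\right) \leq \varepsilon \quad\Longleftrightarrow\quad T \geq O\!\left(\frac{\Psi_1^2}{\varepsilon^2}\right) = O(\varepsilon^{-2}).$$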

4 Experimental Setup and Results

We now introduce the experimental setup and discuss the results. Our implementation can be found
at https://github.com/jlylekim/auto-tuned-FL.
Datasets and models. We consider image classification and text classification tasks. We use four
datasets for image classification: MNIST, FMNIST, CIFAR-10, and CIFAR-100 (Krizhevsky et al.,
2009). For MNIST and FMNIST, we train a shallow CNN with two convolutional and two fully-
connected layers, followed by dropout and ReLU activations. For CIFAR-10, we train a ResNet-18
(He et al., 2016). For CIFAR-100, we train both ResNet-18 and ResNet-50 to study the effect of
changing the model architecture. For text classification, we use two datasets, DBpedia and AGnews
(Zhang et al., 2015), and train a DistilBERT (Sanh et al., 2019) for classification.
For each dataset, we create a federated version by randomly partitioning the training data among
100 clients (50 for text classification), with each client getting 500 examples. To control the level of
non-iidness, we apply latent Dirichlet allocation (LDA) over the labels following Hsu et al. (2019),
where the degree of class heterogeneity can be parameterized by the Dirichlet concentration parameter
α. The class distribution of different α’s is visualized in Figure 8 in Appendix B.9.
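For reference, a minimal sketch of label-based Dirichlet partitioning in the spirit of Hsu et al. (2019) is given below. This is our own simplified illustration; it does not enforce the fixed 500 examples per client used in our setup, and all names are illustrative.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=100, alpha=0.1, seed=0):
    """Split sample indices among clients with per-class Dirichlet shares.

    Smaller alpha concentrates each class on fewer clients (more non-iid);
    larger alpha approaches an iid split.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # draw this class's client proportions from Dir(alpha, ..., alpha)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, chunk in enumerate(np.split(idx, cut_points)):
            client_indices[client_id].extend(chunk.tolist())
    return client_indices

# toy usage: 10 classes, 100 samples each, split over 5 clients
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1)
```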
FL setup and optimizers. We fix the number of clients to be 100 for the image classification, and
50 for the text classification; we randomly sample 10% as participating clients. We perform E local
epochs of training over each client’s dataset. We utilize mini-batch gradients of size b = 64 for the
image classification, and b = 16 for the text classification, leading to K ≈ ⌊E·500/b⌋ local gradient
steps; we use E = 1 for all settings. For client optimizers, we compare stochastic gradient descent
(SGD), SGD with momentum (SGDM), adaptive methods including Adam (Kingma & Ba, 2014),
Adagrad (Duchi et al., 2011), SGD with stochastic Polyak step size (SPS) (Loizou et al., 2021), and
our proposed method: ∆-SGD in Algorithm 1. As our focus is on client adaptivity, we mostly present
the results using FedAvg (McMahan et al., 2017) as the server optimizer; additional results using
FedAdam can be found in Appendix B.4.
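A minimal sketch of the server loop described above (FedAvg aggregation with 10% client sampling) is shown below; this is our own illustration with hypothetical names, where `local_update_fn` stands for any client optimizer, e.g., the ∆-SGD local loop sketched in Section 3.

```python
import numpy as np

def fedavg_round(global_params, client_data, local_update_fn,
                 participation=0.1, rng=None):
    """One communication round: sample clients, run local training, average."""
    rng = rng or np.random.default_rng()
    num_clients = len(client_data)
    num_sampled = max(1, int(participation * num_clients))
    sampled = rng.choice(num_clients, size=num_sampled, replace=False)
    # each sampled client starts from the current global model
    local_params = [local_update_fn(np.copy(global_params), client_data[i])
                    for i in sampled]
    # server-side aggregation: plain average of the returned parameters
    return np.mean(local_params, axis=0)
```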
Hyperparameters. For each optimizer, we perform a grid search of learning rates on a single task:
CIFAR-10 classification trained with a ResNet-18, with Dirichlet concentration parameter α = 0.1;
for the rest of the settings, we use the same learning rates. For SGD, we perform a grid search with
η ∈ {0.01, 0.05, 0.1, 0.5}. For SGDM, we use the same grid for η and use momentum parameter
β = 0.9. To properly account for the SGD(M) fine-tuning typically done in practice, we also test
dividing the step size by 10 after 50%, and again by 10 after 75% of the total training rounds (LR
decay). For Adam and Adagrad, we grid search with η ∈ {0.001, 0.01, 0.1}. For SPS, we use the
default setting of the official implementation.⁴ For ∆-SGD, we append δ in front of the second
condition: √(1 + δθ^i_{t,k−1}) η^i_{t,k−1}, following Malitsky & Mishchenko (2020), and use δ = 0.1 for all
experiments.⁵ Finally, for the number of rounds T, we use 500 for MNIST, 1000 for FMNIST, 2000
for CIFAR-10 and CIFAR-100, and 100 for the text classification tasks.

³In all experiments, we use the default settings γ = 2, η_0 = 0.2, and θ_0 = 1 without additional tuning.
⁴I.e., we use f_i⋆ = 0 and c = 0.5 for the SPS step size: (f_i(x) − f_i⋆) / (c∥∇f_i(x)∥²). The official implementation can be found at https://github.com/IssamLaradji/sps.
⁵We also demonstrate in Appendix B.1 that δ has very little impact on the final accuracy of ∆-SGD.


4.1 Results

We clarify again that the best step size for each client optimizer was tuned via grid-search for a single
task: CIFAR-10 classification trained with a ResNet-18, with Dirichlet concentration parameter
α = 0.1. We then intentionally use the same step size in all other tasks to highlight two points: i)
∆-SGD works well without any tuning across different datasets, model architectures, and degrees of
heterogeneity; ii) other optimizers perform suboptimally without additional tuning.

Non-iidness    Optimizer    MNIST         FMNIST        CIFAR-10      CIFAR-100     CIFAR-100
Dir(α · p)                  CNN           CNN           ResNet-18     ResNet-18     ResNet-50

α = 1          SGD          98.3↓(0.2)    86.5↓(0.8)    87.7↓(2.1)    57.7↓(4.2)    53.0↓(12.8)
               SGD (↓)      97.8↓(0.7)    86.3↓(1.0)    87.8↓(2.0)    61.9↓(0.0)    60.9↓(4.9)
               SGDM         98.5↓(0.0)    85.2↓(2.1)    88.7↓(1.1)    58.8↓(3.1)    60.5↓(5.3)
               SGDM (↓)     98.4↓(0.1)    87.2↓(0.1)    89.3↓(0.5)    61.4↓(0.5)    63.3↓(2.5)
               Adam         94.7↓(3.8)    71.8↓(15.5)   89.4↓(0.4)    55.6↓(6.3)    61.4↓(4.4)
               Adagrad      64.3↓(34.2)   45.5↓(41.8)   86.6↓(3.2)    53.5↓(8.4)    51.9↓(13.9)
               SPS          10.1↓(88.4)   85.9↓(1.4)    82.7↓(7.1)    1.0↓(60.9)    50.0↓(15.8)
               ∆-SGD        98.4↓(0.1)    87.3↓(0.0)    89.8↓(0.0)    61.5↓(0.4)    65.8↓(0.0)

α = 0.1        SGD          98.1↓(0.0)    83.6↓(2.8)    72.1↓(12.9)   54.4↓(6.7)    44.2↓(19.9)
               SGD (↓)      98.0↓(0.1)    84.7↓(1.7)    78.4↓(6.6)    59.3↓(1.8)    48.7↓(15.4)
               SGDM         97.6↓(0.5)    83.6↓(2.8)    79.6↓(5.4)    58.8↓(2.3)    52.3↓(11.8)
               SGDM (↓)     98.0↓(0.1)    86.1↓(0.3)    77.9↓(7.1)    60.4↓(0.7)    52.8↓(11.3)
               Adam         96.4↓(1.7)    80.4↓(6.0)    85.0↓(0.0)    55.4↓(5.7)    58.2↓(5.9)
               Adagrad      89.9↓(8.2)    46.3↓(40.1)   84.1↓(0.9)    49.6↓(11.5)   48.0↓(16.1)
               SPS          96.0↓(2.1)    85.0↓(1.4)    70.3↓(14.7)   42.2↓(18.9)   42.2↓(21.9)
               ∆-SGD        98.1↓(0.0)    86.4↓(0.0)    84.5↓(0.5)    61.1↓(0.0)    64.1↓(0.0)

α = 0.01       SGD          96.8↓(0.7)    79.0↓(1.2)    22.6↓(11.3)   30.5↓(1.3)    24.3↓(7.1)
               SGD (↓)      97.2↓(0.3)    79.3↓(0.9)    33.9↓(0.0)    30.3↓(1.5)    24.6↓(6.8)
               SGDM         77.9↓(19.6)   75.7↓(4.5)    28.4↓(5.5)    24.8↓(7.0)    22.0↓(9.4)
               SGDM (↓)     94.0↓(3.5)    79.5↓(0.7)    29.0↓(4.9)    20.9↓(10.9)   14.7↓(16.7)
               Adam         80.8↓(16.7)   60.6↓(19.6)   22.1↓(11.8)   18.2↓(13.6)   22.6↓(8.8)
               Adagrad      72.4↓(25.1)   45.9↓(34.3)   12.5↓(21.4)   25.8↓(6.0)    22.2↓(9.2)
               SPS          69.7↓(27.8)   44.0↓(36.2)   21.5↓(12.4)   22.0↓(9.8)    17.4↓(14.0)
               ∆-SGD        97.5↓(0.0)    80.2↓(0.0)    31.6↓(2.3)    31.8↓(0.0)    31.4↓(0.0)

Table 1: Experimental results based on the settings detailed in Section 4. The best accuracy (±0.5%) for
each task is shown in bold. The subscript ↓(x.x) denotes the performance difference from the best result, and
is highlighted in pink when it is bigger than 2%. The down-arrow symbol (↓) indicates step-wise
learning rate decay, where the step sizes are divided by 10 after 50%, and again by 10 after 75%, of
the total rounds.

Figure 2: The effect of stronger heterogeneity on different client optimizers, induced by the Dirich-
let concentration parameter α ∈ {0.01, 0.1, 1}. ∆-SGD remains robust in all cases,
whereas other methods show significant performance degradation when changing the level of hetero-
geneity α, or when changing the setting (model/architecture).

Changing the level of non-iidness. We first investigate how the performance of different client
optimizers degrade in increasing degrees of heterogeneity by varying the concentration parameter
α ∈ {1, 0.1, 0.01} multiplied to the prior of Dirichlet distribution, following Hsu et al. (2019).
Three illustrative cases are visualized in Figure 2. We remind that the step sizes are tuned for the
task in Figure 2(A). For this task, with α = 1 (i.e., closer to iid), all methods perform better than
α = 0.1, as expected. With α = 0.01, which is highly non-iid (c.f., Figure 8 in Appendix B.9), we
see a significant drop in performance for all methods. SGD with LR decay and ∆-SGD perform the
best, while adaptive methods like Adam (85% → 22.1%) and Adagrad (84.1% → 12.5%) degrade
noticeably more than other methods.
Figure 2(B) shows the results of training FMNIST with a CNN. This problem is generally considered
easier than CIFAR-10 classification. The experiments show that the performance degradation with
varying α’s is much milder in this case. Interestingly, adaptive methods such as Adam, Adagrad, and
SPS perform much worse than the other methods.
Lastly, in Figure 2(C), results for CIFAR-100 classification trained with a ResNet-50 are illustrated.
∆-SGD exhibits superior performance in all cases of α. Unlike MNIST and FMNIST, Adam enjoys
the second (α = 0.1) or the third (α = 1) best performance in this task, complicating how one should
“tune” Adam for the task at hand. Other methods, including SGD with and without momentum/LR
decay, Adagrad, and SPS, perform much worse than ∆-SGD.
Changing the model architecture/dataset. For this remark, let us focus on CIFAR-100 trained on
ResNet-18 versus on ResNet-50, with α = 0.1, illustrated in Figure 3(A). SGD and SGDM (both
with and without LR decay), Adagrad, and SPS perform worse using ResNet-50 than ResNet-18.
This is a counter-intuitive behavior, as one would expect to get better accuracy by using a more
powerful model, although the step sizes are tuned on ResNet-18. ∆-SGD is an exception: without
any additional tuning, ∆-SGD can improve its performance. Adam also improves similarly, but the
achieved accuracy is significantly worse than that of ∆-SGD.
We now focus on cases where the dataset changes, but the model architecture remains the same. We
mainly consider two cases, illustrated in Figure 3(B) and (C): When CNN is trained for classifying
MNIST versus FMNIST, and when ResNet-18 is trained for CIFAR-10 versus CIFAR-100. From (B),
one can infer the difficulty in tuning SGDM: without the LR decay, SGDM only achieves around 78%
accuracy for MNIST classification, while SGD without LR decay still achieves over 96% accuracy.
For FMNIST on the other hand, SGDM performs relatively well, but adaptive methods like Adam,
Adagrad, and SPS degrade significantly.
Aggravating the complexity of fine-tuning SGD(M) for MNIST, one can observe from (C) a similar
trend in CIFAR-10 trained with ResNet-18. In that case, ∆-SGD does not achieve the best test
accuracy (although it is the second best, by a sizable margin over the rest), while SGD with
LR decay does. However, without LR decay, the accuracy achieved by SGD drops more than 11%.
Again, adaptive methods like Adam, Adagrad, and SPS perform suboptimally in this case.

Figure 3: The effect of changing the dataset and the model architecture on different client optimizers.
∆-SGD retains superior performance without additional tuning when the model or dataset changes,
whereas other methods often degrade in performance. (A): CIFAR-100 trained with ResNet-18
versus ResNet-50 (α = 0.1), (B): MNIST versus FMNIST trained with CNN (α = 0.01), (C):
CIFAR-10 versus CIFAR-100 trained with ResNet-18 (α = 0.01).


Changing the domain: text classification. In Table 2a, we make a bigger leap, and test the
performance of different client optimizers for text classification tasks. We use the DBpedia and
AGnews datasets (Zhang et al., 2015), and train a DistilBERT (Sanh et al., 2019) for classification.
Again, the best step size for each client optimizer was tuned for the CIFAR-10 image classification
trained with a ResNet-18, for α = 0.1. Thus, the type of the dataset changed completely from image
to text, as well as the model architecture: from vision to language.
Not surprisingly, four methods (SGDM, SGDM(↓), Adam, and Adagrad) ended up learning nothing
(i.e., the achieved accuracy is 1/(# of labels)), indicating that the step sizes fine-tuned for these methods
on the CIFAR-10 classification task were too large for the considered text classification tasks. ∆-SGD,
on the other hand, still achieves competitive accuracies without additional tuning, thanks to the locality
adaptive step size. Interestingly, SGD, SGD(↓), and SPS worked well for the text classification tasks,
which is contrary to their suboptimal performances for image classification tasks in Table 1.
Changing the loss function. In Table 2b, we illustrate how ∆-SGD performs when combined with
a different loss function, such as FedProx (Li et al., 2020). FedProx aims to address heterogeneity;
we thus used the lowest concentration parameter, α = 0.01. We only present a subset of results:
CIFAR-10(100) classification with ResNet-18(50) (additional results in Appendix B.6). The results
show that ∆-SGD again outperforms all other methods with a significant margin: ∆-SGD is not only
the best, but also the performance difference is at least 2.5%, and as large as 26.9%. Yet, compared to
the original setting without the proximal term in Table 1, it is unclear whether or not the additional
proximal term helps. For ∆-SGD, the accuracy increased for CIFAR-10 with ResNet-18, but got slightly
worse for CIFAR-100 with ResNet-50; a similar story applies to the other methods.
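For reference, the FedProx modification used here only changes each client's local objective by adding a proximal term (standard form from Li et al. (2020); μ denotes their proximal weight):
$$\min_{x}\; f_i(x) + \frac{\mu}{2}\,\|x - x_t\|^2,$$
where x_t is the global model at the start of the round; ∆-SGD is simply applied to this modified local loss without any change to its step size rule.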

(a) Experimental results for text classification (DistilBERT, α = 1)

Optimizer    AGnews        DBpedia
SGD          91.1↓(0.5)    96.0↓(2.9)
SGD (↓)      91.6↓(0.0)    98.7↓(0.2)
SGDM         25.0↓(66.6)   7.1↓(91.8)
SGDM (↓)     25.0↓(66.6)   7.1↓(91.8)
Adam         25.0↓(66.6)   7.1↓(91.8)
Adagrad      25.0↓(66.6)   7.1↓(91.8)
SPS          91.5↓(0.1)    98.9↓(0.0)
∆-SGD        90.7↓(0.9)    98.6↓(0.3)

(b) Experimental results using the FedProx loss (α = 0.01)

Optimizer    CIFAR-10      CIFAR-100
             ResNet-18     ResNet-50
SGD          20.0↓(13.8)   25.2↓(5.9)
SGD (↓)      31.3↓(2.5)    20.2↓(10.8)
SGDM         29.3↓(4.4)    23.8↓(7.2)
SGDM (↓)     25.3↓(8.5)    15.0↓(16.0)
Adam         28.1↓(5.7)    22.6↓(8.4)
Adagrad      19.3↓(14.5)   4.1↓(26.9)
SPS          27.6↓(6.2)    16.5↓(14.5)
∆-SGD        33.8↓(0.0)    31.0↓(0.0)

Table 2: Additional experiments for a different domain (a), and a different loss function (b).

Other findings. We present all the omitted theorems and proofs in Appendix A, and additional
empirical findings in Appendix B. Specifically, we demonstrate: the robustness of the performance
of ∆-SGD (Appendix B.1), the effect of the different numbers of local iterations (Appendix B.2),
additional experiments when the size of local dataset is different per client (Appendix B.3), additional
experimental results using FedAdam (Reddi et al., 2021) (Appendix B.4), additional experimental
results using MOON (Appendix B.5) and FedProx (Appendix B.6) loss functions, and three indepen-
dent trials for a subset of tasks (Appendix B.7). We also visualize the step size conditions for ∆-SGD
(Appendix B.8) as well as the degree of heterogeneity (Appendix B.9).

5 Conclusion
In this work, we proposed ∆-SGD, a distributed SGD scheme equipped with an adaptive step size
that enables each client to use its own step size and adapts to the local smoothness of the function
each client is optimizing. We proved the convergence of ∆-SGD in the general nonconvex setting and
presented extensive empirical results, where the superiority of ∆-SGD is shown in various scenarios
without any tuning. For future work, extending ∆-SGD to a coordinate-wise step size in the spirit of
Duchi et al. (2011); Kingma & Ba (2014), applying the step size to more complex algorithms like
(local) factored gradient descent (Kim et al., 2022a), and enabling asynchronous updates (Assran
et al., 2020; Toghani et al., 2022; Nguyen et al., 2022) could be interesting directions.


ACKNOWLEDGMENTS
This work is supported by NSF FET: Small No. 1907936, NSF MLWiNS CNS No. 2003137 (in
collaboration with Intel), NSF CMMI No. 2037545, NSF CAREER award No. 2145629, NSF CIF
No. 2008555, Rice InterDisciplinary Excellence Award (IDEA), NSF CCF No. 2211815, NSF DEB
No. 2213568, and Lodieska Stockbridge Vaughn Fellowship.

References
Naman Agarwal, Ananda Theertha Suresh, Felix Xinnan X Yu, Sanjiv Kumar, and Brendan McMahan.
cpSGD: Communication-efficient and differentially-private distributed SGD. Advances in Neural
Information Processing Systems, 31, 2018.
Larry Armijo. Minimization of functions having lipschitz continuous first partial derivatives. Pacific
Journal of mathematics, 16(1):1–3, 1966.
Mahmoud Assran and Michael Rabbat. On the convergence of nesterov’s accelerated gradient method
in stochastic settings. arXiv preprint arXiv:2002.12414, 2020.
Mahmoud Assran, Arda Aytekin, Hamid Reza Feyzmahdavian, Mikael Johansson, and Michael G
Rabbat. Advances in asynchronous parallel and distributed optimization. Proceedings of the IEEE,
108(11):2013–2031, 2020.
Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, and Yisong Yue. Automatic gradient
descent: Deep learning without hyperparameters. arXiv preprint arXiv:2304.05187, 2023.
Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex optimization. Cambridge
university press, 2004.
Volkan Cevher and Bang Công Vũ. On the linear convergence of the stochastic gradient method with
constant step-size. Optimization Letters, 13(5):1177–1187, 2019.
Aaron Defazio and Konstantin Mishchenko. Learning-rate-free learning by d-adaptation. In Interna-
tional Conference on Machine Learning, pp. 7449–7479. PMLR, 2023.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and
stochastic optimization. Journal of machine learning research, 12(7), 2011.
Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic
programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
Robert M Gower, Mark Schmidt, Francis Bach, and Peter Richtárik. Variance-reduced methods for
machine learning. Proceedings of the IEEE, 108(11):1968–1983, 2020.
Robert M Gower, Peter Richtárik, and Francis Bach. Stochastic quasi-gradient methods: Variance
reduction via jacobian sketching. Mathematical Programming, 188:135–192, 2021.
Elad Hazan and Sham Kakade. Revisiting the polyak step size. arXiv preprint arXiv:1905.00313,
2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image
recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition,
pp. 770–778, 2016.
Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data
distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.
Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance
reduction. Advances in neural information processing systems, 26, 2013.
Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U
Stich, and Ananda Theertha Suresh. Mime: Mimicking centralized stochastic algorithms in
federated learning. arXiv preprint arXiv:2008.03606, 2020.


Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Tighter theory for local sgd on identical
and heterogeneous data. In International Conference on Artificial Intelligence and Statistics, pp.
4519–4529. PMLR, 2020.
Junhyung Lyle Kim, Mohammad Taha Toghani, César A Uribe, and Anastasios Kyrillidis. Local
stochastic factored gradient descent for distributed quantum state tomography. IEEE Control
Systems Letters, 7:199–204, 2022a.
Junhyung Lyle Kim, Panos Toulis, and Anastasios Kyrillidis. Convergence and stability of the
stochastic proximal point algorithm with momentum. In Learning for Dynamics and Control
Conference, pp. 1034–1047. PMLR, 2022b.
Junhyung Lyle Kim, Taha Toghani, Cesar A Uribe, and Anastasios Kyrillidis. Adaptive federated
learning with auto-tuned clients via local smoothness. In Federated Learning and Analytics in
Practice: Algorithms, Systems, Applications, and Opportunities, 2023.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified
theory of decentralized sgd with changing topology and local updates. In International Conference
on Machine Learning, pp. 5381–5393. PMLR, 2020.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Guanghui Lan, Soomin Lee, and Yi Zhou. Communication-efficient algorithms for decentralized and
stochastic optimization. Mathematical Programming, 180(1-2):237–284, 2020.
Kfir Y Levy, Alp Yurtsever, and Volkan Cevher. Online adaptive methods, universality and accelera-
tion. Advances in neural information processing systems, 31, 2018.
Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10713–10722, 2021.
Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith.
Federated optimization in heterogeneous networks. Proceedings of Machine learning and systems,
2:429–450, 2020.
Xianfeng Liang, Shuheng Shen, Jingchang Liu, Zhen Pan, Enhong Chen, and Yifei Cheng. Variance
reduced local sgd with lower communication complexity. arXiv preprint arXiv:1912.12844, 2019.
Nicolas Loizou, Sharan Vaswani, Issam Hadj Laradji, and Simon Lacoste-Julien. Stochastic polyak
step-size for sgd: An adaptive learning rate for fast convergence. In International Conference on
Artificial Intelligence and Statistics, pp. 1306–1314. PMLR, 2021.
Yura Malitsky and Konstantin Mishchenko. Adaptive gradient descent without descent. In Proceed-
ings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of
Machine Learning Research, pp. 6702–6712. PMLR, 2020.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas.
Communication-efficient learning of deep networks from decentralized data. In Artificial intelli-
gence and statistics, pp. 1273–1282. PMLR, 2017.
Sohom Mukherjee, Nicolas Loizou, and Sebastian U Stich. Locally adaptive federated learning via
stochastic polyak stepsizes. arXiv preprint arXiv:2307.06306, 2023.
Yu Nesterov. Universal gradient methods for convex optimization problems. Mathematical Program-
ming, 152(1-2):381–404, 2015.
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Mike Rabbat, Mani Malek, and
Dzmitry Huba. Federated learning with buffered asynchronous aggregation. In International
Conference on Artificial Intelligence and Statistics, pp. 3581–3607. PMLR, 2022.
Francesco Orabona and Dávid Pál. Coin betting and parameter-free online learning. Advances in
Neural Information Processing Systems, 29, 2016.


Courtney Paquette and Katya Scheinberg. A stochastic line search method with expected complexity
analysis. SIAM Journal on Optimization, 30(1):349–376, 2020.
Boris Teodorovich Polyak. Minimization of nonsmooth functionals. Zhurnal Vychislitel’noi Matem-
atiki i Matematicheskoi Fiziki, 9(3):509–521, 1969.
Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and
Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. arXiv
e-prints, pp. arXiv–1910, 2019.
Guannan Qu and Na Li. Accelerated distributed nesterov gradient descent. IEEE Transactions on
Automatic Control, 65(6):2566–2581, 2019.
Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný,
Sanjiv Kumar, and Hugh Brendan McMahan. Adaptive federated optimization. In International
Conference on Learning Representations, 2021.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of
bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulié. Optimal
algorithms for smooth and strongly convex distributed optimization in networks. In international
conference on machine learning, pp. 3027–3036. PMLR, 2017.
Mark Schmidt and Nicolas Le Roux. Fast convergence of stochastic gradient descent under a strong
growth condition. arXiv preprint arXiv:1308.6370, 2013.
Sebastian U Stich. Local sgd converges fast and communicates little. arXiv preprint arXiv:1805.09767,
2018.
Mohammad Taha Toghani, Soomin Lee, and César A Uribe. PersA-FL: Personalized Asynchronous
Federated Learning. arXiv preprint arXiv:2210.01176, 2022.
Panos Toulis and Edoardo M Airoldi. Asymptotic and finite-sample properties of estimators based on
stochastic gradients. 2017.
César A Uribe, Soomin Lee, Alexander Gasnikov, and Angelia Nedić. A dual approach for optimal
algorithms in distributed optimization over networks. In 2020 Information Theory and Applications
Workshop (ITA), pp. 1–37. IEEE, 2020.
Sharan Vaswani, Francis Bach, and Mark Schmidt. Fast and faster convergence of sgd for over-
parameterized models and an accelerated perceptron. In The 22nd international conference on
artificial intelligence and statistics, pp. 1195–1204. PMLR, 2019a.
Sharan Vaswani, Aaron Mishkin, Issam Laradji, Mark Schmidt, Gauthier Gidel, and Simon Lacoste-
Julien. Painless stochastic gradient: Interpolation, line-search, and convergence rates. Advances in
neural information processing systems, 32, 2019b.
Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, and Gauri Joshi. Local
adaptivity in federated learning: Convergence and consistency. arXiv preprint arXiv:2106.02305,
2021.
Rachel Ward, Xiaoxia Wu, and Leon Bottou. Adagrad stepsizes: Sharp convergence over nonconvex
landscapes. The Journal of Machine Learning Research, 21(1):9047–9076, 2020.
Cong Xie, Oluwasanmi Koyejo, Indranil Gupta, and Haibin Lin. Local adaalter: Communication-
efficient stochastic gradient descent with adaptive learning rates. arXiv preprint arXiv:1911.09030,
2019a.
Cong Xie, Sanmi Koyejo, and Indranil Gupta. Asynchronous federated optimization. arXiv preprint
arXiv:1903.03934, 2019b.
Kun Yuan, Qing Ling, and Wotao Yin. On the convergence of decentralized gradient descent. SIAM
Journal on Optimization, 26(3):1835–1854, 2016.


Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, and Sanjiv Kumar. Adaptive methods
for nonconvex optimization. Advances in neural information processing systems, 31, 2018.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text
classification. Advances in neural information processing systems, 28, 2015.


Supplementary Materials for
“Adaptive Federated Learning with Auto-Tuned Clients”

In this appendix, we provide all missing proofs that were not present in the main text, as well as
additional plots and experiments. The appendix is organized as follows.

• In Appendix A, we provide all the missing proofs. In particular:


– In Section A.1, we provide the proof of Algorithm 1, under Assumption 1.
– In Section A.2, we provide the proof of Algorithm 1, with some modified assumptions,
including that fi is convex for all i ∈ [m].
• In Appendix B, we provide additional experiments, as well as miscellaneous plots that were missing
in the main text due to the space constraints. Specifically:
– In Section B.1, we provide some perspectives on the ease of tuning of ∆-SGD in Algorithm 1.
– In Section B.2, we provide some perspectives on how ∆-SGD performs using different numbers
of local iterations.
– In Section B.3, we provide additional experimental results where each client has a different
number of samples in its local dataset.
– In Section B.4, we provide additional experimental results using FedAdam (Reddi et al., 2021)
server-side adaptive method.
– In Section B.5, we provide additional experimental results using MOON (Li et al., 2021) loss
function.
– In Section B.6, we provide additional experiments using the FedProx (Li et al., 2020) loss function.
– In Section B.7, we perform three independent trials for a subset of tasks for all client-optimizers.
We plot the average and the standard deviations in Figure 6, and report those values in Table 7.
– In Section B.8, we visualize the step size conditions for ∆-SGD.
– In Section B.9, we provide illustration of the different level of heterogeneity induced by the
Dirichlet concentration parameter α.

A Missing Proofs

We first introduce the following two inequalities, which will be used in the subsequent proofs:
$$\langle x_i, x_j \rangle \leq \frac{1}{2}\|x_i\|^2 + \frac{1}{2}\|x_j\|^2, \quad \text{and} \qquad (7)$$
$$\Big\|\sum_{i=1}^{m} x_i\Big\|^2 \leq m\sum_{i=1}^{m}\|x_i\|^2, \qquad (8)$$
for any set of m vectors {x_i}_{i=1}^{m} with x_i ∈ ℝ^d.

We also provide the following lemma, which establishes L̃-smoothness based on the local smoothness.
Lemma 2. For a locally smooth function f_i, the following inequality holds:
$$f_i(y) \leq f_i(x) + \langle \nabla f_i(x), y - x \rangle + \frac{\tilde{L}_{i,t}}{2}\|y - x\|^2 \quad \forall x, y \in S_{i,t}, \qquad (9)$$
where S_{i,t} := conv({x^i_{t,k}}_{k=1}^{K} ∪ {x_t, x_{t+1}}).

Proof. First, notice that by Assumptions (1a) and (1b), stochastic gradients are also bounded for
all x ∈ ℝ^d. Therefore, the local iterates {x^i_{t,k}}_{k=0}^{K} remain bounded. Hence, the set S_{i,t} :=
conv({x^i_{t,k}}_{k=1}^{K} ∪ {x_t, x_{t+1}}), where conv(·) denotes the convex hull, is bounded. Due to the fact
that f_i is locally smooth, ∇f_i is Lipschitz continuous on bounded sets. This implies there exists a
positive constant L̃_{i,t} such that
$$\|\nabla f_i(x) - \nabla f_i(y)\| \leq \tilde{L}_{i,t}\,\|y - x\| \quad \forall x, y \in S_{i,t}. \qquad (10)$$
Thus, we have
$$\langle \nabla f_i(y) - \nabla f_i(x), y - x \rangle \leq \|\nabla f_i(y) - \nabla f_i(x)\| \cdot \|y - x\| \leq \tilde{L}_{i,t}\|y - x\|^2 \quad \forall x, y \in S_{i,t}.$$
Let g(τ) = f_i(x + τ(y − x)). Then, we have
$$g'(\tau) = \langle \nabla f_i(x + \tau(y - x)), y - x \rangle,$$
$$g'(\tau) - g'(0) = \langle \nabla f_i(x + \tau(y - x)) - \nabla f_i(x), y - x \rangle \leq \|\nabla f_i(x + \tau(y - x)) - \nabla f_i(x)\| \cdot \|y - x\| \leq \tau\,\tilde{L}_{i,t}\|y - x\|^2.$$
Using the mean value theorem, we have
$$f_i(y) = g(1) = g(0) + \int_0^1 g'(\tau)\,d\tau \leq g(0) + \int_0^1 \big[g'(0) + \tau\,\tilde{L}_{i,t}\|y - x\|^2\big]\,d\tau \leq g(0) + g'(0) + \frac{\tilde{L}_{i,t}}{2}\|y - x\|^2 \leq f_i(x) + \langle \nabla f_i(x), y - x \rangle + \frac{\tilde{L}_{i,t}}{2}\|y - x\|^2.$$

Now, we define L̃_i := max_t L̃_{i,t}. Given this definition, we relate the local-smoothness constant L̃ of
the average function f with the (individual) local-smoothness constants L̃_i in the lemma below.

Lemma 3. The average of L̃_i-locally smooth functions f_i is at least ((1/m) Σ_{i=1}^{m} L̃_i)-locally smooth.

Proof. Starting from (10), we have:
$$\Big\|\frac{1}{m}\sum_{i=1}^{m}\nabla f_i(x) - \frac{1}{m}\sum_{i=1}^{m}\nabla f_i(y)\Big\| = \Big\|\frac{1}{m}\sum_{i=1}^{m}\big(\nabla f_i(x) - \nabla f_i(y)\big)\Big\| \leq \frac{1}{m}\sum_{i=1}^{m}\big\|\nabla f_i(x) - \nabla f_i(y)\big\|$$
$$\leq \frac{1}{m}\sum_{i=1}^{m}\tilde{L}_i\,\|x - y\| \qquad (11)$$
$$\leq \tilde{L}\,\|x - y\|, \quad \text{where } \tilde{L} = \max_{i,t}\tilde{L}_{i,t}. \qquad (12)$$

Technically, (11) can provide a slightly tighter bound for Theorem 1, but (12) simplifies the final
expression. We also have the following form:
$$f(x_{t+1}) \leq f(x_t) + \langle \nabla f(x_t), x_{t+1} - x_t \rangle + \frac{\tilde{L}}{2}\|x_{t+1} - x_t\|^2,$$
which can be obtained from Lemma 2.


A.1 Proof of Algorithm 1 under Assumption 1

Proof. Based on Algorithm 1, we use the following notation throughout the proof:
$$x^i_{t,0} = x_t, \qquad x^i_{t,k} = x^i_{t,k-1} - \eta^i_{t,k-1}\,\tilde{\nabla} f_i(x^i_{t,k-1}) \;\;\forall k \in [K], \qquad x_{t+1} = \frac{1}{|S_t|}\sum_{i \in S_t} x^i_{t,K},$$
where x_{t+1} denotes the server parameter at round t + 1, i.e., the average of the local parameters x^i_{t,K}
of the participating clients after running K local iterations.

We also recall that we use ∇̃f_i(x) = (1/|B|) Σ_{z∈B} ∇F_i(x, z) to denote the stochastic gradients with
batch size |B| = b. Note that whenever we use this shorthand notation in the theory, we imply that
we draw a new independent batch of samples to compute the stochastic gradients.
We start from the smoothness of f (·), which we note again is obtained via Lemma 3, i.e., as a
byproduct of our analysis, and we do not assume that fi ’s are globally L-smooth. Also note that
Lemma 3 holds for {x0 , x1 , . . . }, i.e., the average of the local parameters visited by Algorithm 1.
Proceeding, we have
$$f(x_{t+1}) \leq f(x_t) - \Big\langle \nabla f(x_t),\, \frac{1}{|S_t|}\sum_{i\in S_t}\sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\rangle + \frac{\tilde{L}}{2}\Big\| \frac{1}{|S_t|}\sum_{i\in S_t}\sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\|^2.$$

Taking expectation of the expression above, where we overload the notation such that the expectation
is both with respect to the client sampling randomness (indexed with i) as well as the data sampling
randomness (indexed with z), that is E(·) := E_{F_t}[E_i[E_z[· | F_t, i] | F_t]], where F_t is the natural
filtration, we have:

$$\mathbb{E}f(x_{t+1}) \leq f(x_t) - \mathbb{E}\Big\langle \nabla f(x_t),\, \frac{1}{|S_t|}\sum_{i\in S_t}\sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\rangle + \frac{\tilde{L}}{2}\,\mathbb{E}\Big\| \frac{1}{|S_t|}\sum_{i\in S_t}\sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\|^2 \qquad (13)$$
$$\leq f(x_t) - \mathbb{E}\Big\langle \nabla f(x_t),\, \frac{1}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\rangle + \frac{\tilde{L}}{2m}\sum_{i=1}^{m}\mathbb{E}\Big\| \sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\|^2 \qquad (14)$$
$$= f(x_t) - D_t\|\nabla f(x_t)\|^2 - D_t\,\mathbb{E}\Big\langle \nabla f(x_t),\, \frac{1}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}\frac{\eta^i_{t,k}}{D_t}\big(\nabla f_i(x^i_{t,k}) - \nabla f(x_t)\big) \Big\rangle + \frac{\tilde{L}}{2m}\sum_{i=1}^{m}\mathbb{E}\Big\| \sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\|^2, \qquad (15)$$

where in the equality we added and subtracted \frac{1}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}\eta^i_{t,k}\nabla f(x_t) in the inner product term,
and also set D_t := \frac{1}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}\eta^i_{t,k}. Then, using (7) and (13)-(15), we have

$$\mathbb{E}f(x_{t+1}) \leq f(x_t) - D_t\|\nabla f(x_t)\|^2 + \frac{D_t}{2}\|\nabla f(x_t)\|^2 + \frac{D_t}{2}\,\mathbb{E}\Big\| \frac{1}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}\frac{\eta^i_{t,k}}{D_t}\big(\nabla f_i(x^i_{t,k}) - \nabla f(x_t)\big) \Big\|^2 + \frac{\tilde{L}}{2m}\sum_{i=1}^{m}\mathbb{E}\Big\| \sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\|^2 \qquad (16)$$
$$= f(x_t) - \frac{D_t}{2}\|\nabla f(x_t)\|^2 + \frac{1}{2D_t}\,\mathbb{E}\Big\| \frac{1}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}\eta^i_{t,k}\big(\nabla f_i(x^i_{t,k}) - \nabla f(x_t)\big) \Big\|^2 + \frac{\tilde{L}}{2m}\sum_{i=1}^{m}\mathbb{E}\Big\| \sum_{k=0}^{K-1}\eta^i_{t,k}\,\tilde{\nabla} f_i(x^i_{t,k}) \Big\|^2 \qquad (17)$$
$$= f(x_t) - \frac{D_t}{2}\|\nabla f(x_t)\|^2 + \frac{1}{2D_t}\,\mathbb{E}\Big\| \frac{1}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}\eta^i_{t,k}\big(\nabla f_i(x_t) - \nabla f_i(x^i_{t,k}) + \nabla f_i(x_t) - \nabla f(x_t)\big) \Big\|^2$$
$$\quad + \frac{\tilde{L}}{2m}\sum_{i=1}^{m}\mathbb{E}\Big\| \sum_{k=0}^{K-1}\eta^i_{t,k}\big(\tilde{\nabla} f_i(x^i_{t,k}) - \nabla f_i(x^i_{t,k}) + \nabla f_i(x^i_{t,k}) - \nabla f_i(x_t) + \nabla f_i(x_t) - \nabla f(x_t) + \nabla f(x_t)\big) \Big\|^2 \qquad (18)$$

We now expand the terms in (18) and bound each term separately, as follows.

$$\mathbb{E}f(x_{t+1}) \leq f(x_t) - \frac{D_t}{2}\|\nabla f(x_t)\|^2 + \underbrace{\frac{1}{D_t}\cdot\frac{1}{m}\sum_{i=1}^{m}\mathbb{E}\Big\|\sum_{k=0}^{K-1}\eta^i_{t,k}\big(\nabla f_i(x_t) - \nabla f_i(x^i_{t,k})\big)\Big\|^2}_{A_1} + \underbrace{\frac{1}{D_t}\cdot\frac{1}{m}\sum_{i=1}^{m}\mathbb{E}\Big\|\sum_{k=0}^{K-1}\eta^i_{t,k}\big(\nabla f_i(x_t) - \nabla f(x_t)\big)\Big\|^2}_{A_2}$$
$$\quad + \underbrace{\frac{2\tilde{L}}{m}\sum_{i=1}^{m}\mathbb{E}\Big\|\sum_{k=0}^{K-1}\eta^i_{t,k}\big(\tilde{\nabla} f_i(x^i_{t,k}) - \nabla f_i(x^i_{t,k})\big)\Big\|^2}_{A_3} + \underbrace{\frac{2\tilde{L}}{m}\sum_{i=1}^{m}\mathbb{E}\Big\|\sum_{k=0}^{K-1}\eta^i_{t,k}\big(\nabla f_i(x^i_{t,k}) - \nabla f_i(x_t)\big)\Big\|^2}_{A_4}$$
$$\quad + \underbrace{\frac{2\tilde{L}}{m}\sum_{i=1}^{m}\mathbb{E}\Big\|\sum_{k=0}^{K-1}\eta^i_{t,k}\big(\nabla f_i(x_t) - \nabla f(x_t)\big)\Big\|^2}_{A_5} + \underbrace{\frac{2\tilde{L}}{m}\sum_{i=1}^{m}\mathbb{E}\Big\|\sum_{k=0}^{K-1}\eta^i_{t,k}\,\nabla f(x_t)\Big\|^2}_{A_6}. \qquad (19)$$

Now, we provide an upper bound for each term in (19). Starting with A_1, using (7), (8), and (12), we have
$$A_1 \leq \frac{K}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\big\|\nabla f_i(x_t) - \nabla f_i(x^i_{t,k})\big\|^2 \qquad (20)$$
$$\leq \frac{K\tilde{L}^2}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\big\|x_t - x^i_{t,k}\big\|^2 = \frac{K\tilde{L}^2}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\,\tilde{\nabla} f_i(x^i_{t,\ell})\Big\|^2$$
$$= \frac{K\tilde{L}^2}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\big(\tilde{\nabla} f_i(x^i_{t,\ell}) - \nabla f_i(x^i_{t,\ell}) + \nabla f_i(x^i_{t,\ell})\big)\Big\|^2$$
$$\leq \frac{2K\tilde{L}^2}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\Big[\mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\big(\tilde{\nabla} f_i(x^i_{t,\ell}) - \nabla f_i(x^i_{t,\ell})\big)\Big\|^2 + \mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\,\nabla f_i(x^i_{t,\ell})\Big\|^2\Big]$$
$$\leq \frac{2K\tilde{L}^2}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}k\,(\eta^i_{t,k})^2\sum_{\ell=0}^{k-1}(\eta^i_{t,\ell})^2\Big[\mathbb{E}\big\|\tilde{\nabla} f_i(x^i_{t,\ell}) - \nabla f_i(x^i_{t,\ell})\big\|^2 + \mathbb{E}\big\|\nabla f_i(x^i_{t,\ell})\big\|^2\Big]$$
$$\leq \frac{2K\tilde{L}^2}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}k\,(\eta^i_{t,k})^2\sum_{\ell=0}^{k-1}(\eta^i_{t,\ell})^2\Big(\frac{\sigma^2}{b} + G^2\Big) \leq \frac{2K^2\tilde{L}^2\big(\frac{\sigma^2}{b} + G^2\big)}{D_t m}\sum_{i=1}^{m}\Big(\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\Big)^2. \qquad (21)$$

For A_2, using (7) and (8), we have:
$$A_2 \leq \frac{K}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\big\|\nabla f_i(x_t) - \nabla f(x_t)\big\|^2 \leq \frac{K\rho}{D_t m}\,\big\|\nabla f(x_t)\big\|^2\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2. \qquad (22)$$

For A_3, using (7), (8), and (12), we have:
$$A_3 \leq \frac{2K\tilde{L}}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\big\|\tilde{\nabla} f_i(x^i_{t,k}) - \nabla f_i(x^i_{t,k})\big\|^2 \leq \frac{2K\tilde{L}\sigma^2}{mb}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2. \qquad (23)$$

For A_4, using (7), (8), and (12), we have:
$$A_4 \leq \frac{2K\tilde{L}}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\big\|\nabla f_i(x_t) - \nabla f_i(x^i_{t,k})\big\|^2 \qquad (24)$$
$$\leq \frac{2K\tilde{L}^3}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\big\|x_t - x^i_{t,k}\big\|^2 = \frac{2K\tilde{L}^3}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\,\tilde{\nabla} f_i(x^i_{t,\ell})\Big\|^2$$
$$= \frac{2K\tilde{L}^3}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\,\mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\big(\tilde{\nabla} f_i(x^i_{t,\ell}) - \nabla f_i(x^i_{t,\ell}) + \nabla f_i(x^i_{t,\ell})\big)\Big\|^2$$
$$\leq \frac{4K\tilde{L}^3}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\Big[\mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\big(\tilde{\nabla} f_i(x^i_{t,\ell}) - \nabla f_i(x^i_{t,\ell})\big)\Big\|^2 + \mathbb{E}\Big\|\sum_{\ell=0}^{k-1}\eta^i_{t,\ell}\,\nabla f_i(x^i_{t,\ell})\Big\|^2\Big]$$
$$\leq \frac{4K\tilde{L}^3}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}k\,(\eta^i_{t,k})^2\sum_{\ell=0}^{k-1}(\eta^i_{t,\ell})^2\Big[\mathbb{E}\big\|\tilde{\nabla} f_i(x^i_{t,\ell}) - \nabla f_i(x^i_{t,\ell})\big\|^2 + \mathbb{E}\big\|\nabla f_i(x^i_{t,\ell})\big\|^2\Big]$$
$$\leq \frac{4K\tilde{L}^3}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}k\,(\eta^i_{t,k})^2\sum_{\ell=0}^{k-1}(\eta^i_{t,\ell})^2\Big(\frac{\sigma^2}{b} + G^2\Big) \leq \frac{4K^2\tilde{L}^3\big(\frac{\sigma^2}{b} + G^2\big)}{m}\sum_{i=1}^{m}\Big(\sum_{k=0}^{K-1}(\eta^i_{t,k})^2\Big)^2. \qquad (25)$$


For A5 , using (7), (8), and (12), we have:


$$A_5 \leq \frac{2K\tilde{L}}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}\,\mathbb{E}\big\|\nabla f_i(x_t)-\nabla f(x_t)\big\|^{2} \leq \frac{2K\tilde{L}\rho}{m}\,\big\|\nabla f(x_t)\big\|^{2}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}. \qquad (26)$$

For A6 , using (7), (8), and (12), we have:


$$A_6 \leq \frac{2K\tilde{L}}{m}\,\big\|\nabla f(x_t)\big\|^{2}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}. \qquad (27)$$

Putting all bounds together. Now, by putting (20)–(27) back into (19) and rearranging the terms,
we obtain the following inequality:
$$\begin{aligned}
&\Big(\frac{D_t}{2} - \frac{K\rho}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2} - \frac{2K\tilde{L}(1+\rho)}{m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}\Big)\|\nabla f(x_t)\|^{2}\\
&\quad\leq f(x_t) - f(x_{t+1}) + \frac{2K^{2}\tilde{L}^{2}\big(\frac{\sigma^{2}}{b}+G^{2}\big)}{D_t m}\sum_{i=1}^{m}\Big(\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}\Big)^{2}\\
&\qquad + \frac{2K\tilde{L}\sigma^{2}}{mb}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2} + \frac{4K^{2}\tilde{L}^{3}\big(\frac{\sigma^{2}}{b}+G^{2}\big)}{m}\sum_{i=1}^{m}\Big(\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}\Big)^{2}. \qquad (28)
\end{aligned}$$
Now, it remains to notice that, by definition, $\eta_{t,k}^{i} = O(\gamma)$, for all $i\in[m]$, $k\in\{0,1,\dots,K-1\}$, and $t\geq 0$. Therefore, $D_t = O(\gamma K)$. Thus, by dividing the above inequality by $\frac{D_t}{2}$, we have:
$$\begin{aligned}
&\Big(1 - \underbrace{\frac{2K\rho}{D_t^{2} m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}}_{O(1)} - \underbrace{\frac{4K\tilde{L}(1+\rho)}{D_t m}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}}_{O(\gamma K)}\Big)\|\nabla f(x_t)\|^{2}\\
&\quad\leq \underbrace{\frac{2\big(f(x_t)-f(x_{t+1})\big)}{D_t}}_{O\left(\frac{1}{\gamma K}\right)} + \underbrace{\frac{4K\tilde{L}\sigma^{2}}{D_t m b}\sum_{i=1}^{m}\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}}_{O(\gamma K)}\\
&\qquad + \underbrace{\frac{4K^{2}\tilde{L}^{2}\big(\frac{\sigma^{2}}{b}+G^{2}\big)}{D_t^{2} m}\sum_{i=1}^{m}\Big(\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}\Big)^{2}}_{O(\gamma^{2}K^{2})} + \underbrace{\frac{8K^{2}\tilde{L}^{3}\big(\frac{\sigma^{2}}{b}+G^{2}\big)}{D_t m}\sum_{i=1}^{m}\Big(\sum_{k=0}^{K-1}(\eta_{t,k}^{i})^{2}\Big)^{2}}_{O(\gamma^{3}K^{3})}, \qquad (29)
\end{aligned}$$

for $\rho = O(1)$. For the simplicity of exposition, assuming $\rho \leq \frac{1}{4}$ and $T \geq (5/\rho)^{2} = 400$, we obtain the below by averaging (29) for $t = 0, 1, \dots, T-1$:
$$\frac{1}{T}\sum_{t=0}^{T-1}\|\nabla f(x_t)\|^{2} \leq O\Big(\frac{f(x_0)-f^{\star}}{\gamma K T}\Big) + O\Big(\frac{\gamma K\sigma^{2}}{b}\Big) + O\Big(\tilde{L}^{2}\Big(\frac{\sigma^{2}}{b}+G^{2}\Big)\gamma^{2}K^{2}\Big) + O\Big(\tilde{L}^{3}\Big(\frac{\sigma^{2}}{b}+G^{2}\Big)\gamma^{3}K^{3}\Big). \qquad (30)$$


Finally, by choosing $\gamma = O\big(\frac{1}{K\sqrt{T}}\big)$, and defining $\Psi_1 = \max\big\{\frac{\sigma^{2}}{b},\, f(x_0)-f(x^{\star})\big\}$ and $\Psi_2 = \frac{\sigma^{2}}{b}+G^{2}$ with $b = |\mathcal{B}|$ being the batch size, we arrive at the statement in Theorem 1, i.e.,
$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big\|\nabla f(x_t)\big\|^{2} \leq O\Big(\frac{\Psi_1}{\sqrt{T}}\Big) + O\Big(\frac{\tilde{L}^{2}\Psi_2}{T}\Big) + O\Big(\frac{\tilde{L}^{3}\Psi_2}{\sqrt{T^{3}}}\Big).$$
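As a sanity check on this last step, substituting $\gamma K = O(1/\sqrt{T})$ into each term of (30) gives
$$O\Big(\frac{f(x_0)-f^{\star}}{\gamma K T}\Big) = O\Big(\frac{f(x_0)-f^{\star}}{\sqrt{T}}\Big),\quad O\Big(\frac{\gamma K\sigma^{2}}{b}\Big) = O\Big(\frac{\sigma^{2}}{b\sqrt{T}}\Big),\quad O\big(\tilde{L}^{2}\Psi_2\gamma^{2}K^{2}\big) = O\Big(\frac{\tilde{L}^{2}\Psi_2}{T}\Big),\quad O\big(\tilde{L}^{3}\Psi_2\gamma^{3}K^{3}\big) = O\Big(\frac{\tilde{L}^{3}\Psi_2}{\sqrt{T^{3}}}\Big),$$
and the first two terms are absorbed into $O(\Psi_1/\sqrt{T})$ by the definition of $\Psi_1$.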

A.2 PROOF OF ALGORITHM 1 FOR CONVEX CASE

Here, we provide an extension of the proof of Malitsky & Mishchenko (2020, Theorem 1) to the distributed setting. Note that the proof we provide here is not for exactly the same version of ∆-SGD presented in Algorithm 1; in particular, we assume no local iterations, i.e., k = 1 for all i ∈ [m], and that every client participates, i.e., p = 1, without stochastic gradients.
We start by constructing a Lyapunov function for the distributed objective in (1).
Lemma 4. Let $f_i : \mathbb{R}^d \to \mathbb{R}$ be convex with locally Lipschitz gradients. Then, the sequence $\{x_t\}$ generated by Algorithm 1, assuming $k=1$ and $p=1$ with full batch, satisfies the following:
$$\|x_{t+1}-x^{\star}\|^{2} + \frac{1}{2m}\sum_{i=1}^{m}\|x_{t+1}^{i}-x_{t}^{i}\|^{2} + \frac{2}{m}\sum_{i=1}^{m}\eta_{t}^{i}(1+\theta_{t}^{i})\big[f_i(x_{t}^{i})-f_i(x^{\star})\big]$$
$$\leq \|x_{t}-x^{\star}\|^{2} + \frac{1}{2m}\sum_{i=1}^{m}\|x_{t}^{i}-x_{t-1}^{i}\|^{2} + \frac{2}{m}\sum_{i=1}^{m}\eta_{t}^{i}\theta_{t}^{i}\big[f_i(x_{t-1}^{i})-f_i(x^{\star})\big]. \qquad (31)$$

The above lemma, which we prove below, constructs a contracting Lyapunov function, which can be seen from the second condition on the step size $\eta_t^i$:
$$\eta_{t+1}^{i} \leq \sqrt{1+\theta_{t}^{i}}\,\eta_{t}^{i} \;\Longrightarrow\; \eta_{t+1}^{i}\theta_{t+1}^{i} \leq (1+\theta_{t}^{i})\eta_{t}^{i}.$$
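To spell out this implication (a one-line check, using the definition $\theta_{t+1}^{i} = \eta_{t+1}^{i}/\eta_{t}^{i}$): squaring the first inequality and dividing by $\eta_{t}^{i}$ gives
$$\eta_{t+1}^{i}\theta_{t+1}^{i} = \frac{(\eta_{t+1}^{i})^{2}}{\eta_{t}^{i}} \leq \frac{(1+\theta_{t}^{i})(\eta_{t}^{i})^{2}}{\eta_{t}^{i}} = (1+\theta_{t}^{i})\eta_{t}^{i}.$$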

Proof. We denote $x_t = \frac{1}{m}\sum_{i=1}^{m}x_{t}^{i}$.
$$\|x_{t+1}-x^{\star}\|^{2} = \Big\|x_t - \frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i}) - x^{\star}\Big\|^{2} \qquad (32)$$
$$= \|x_t-x^{\star}\|^{2} + \Big\|\frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i})\Big\|^{2} - 2\Big\langle x_t-x^{\star},\ \frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i})\Big\rangle. \qquad (33)$$
We will first bound the second term in (33). Using $\big\|\sum_{i=1}^{m}a_i\big\|^{2} \leq m\sum_{i=1}^{m}\|a_i\|^{2}$, we have
$$\Big\|\frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i})\Big\|^{2} = \Big\|\frac{1}{m}\sum_{i=1}^{m}\big(x_{t}^{i}-x_{t+1}^{i}\big)\Big\|^{2} \leq \frac{1}{m}\sum_{i=1}^{m}\|x_{t}^{i}-x_{t+1}^{i}\|^{2}. \qquad (34)$$

To bound $\|x_{t}^{i}-x_{t+1}^{i}\|^{2}$, observe that
$$\begin{aligned}
\|x_{t}^{i}-x_{t+1}^{i}\|^{2} &= 2\|x_{t}^{i}-x_{t+1}^{i}\|^{2} - \|x_{t}^{i}-x_{t+1}^{i}\|^{2}\\
&= 2\eta_{t}^{i}\big\langle\nabla f_i(x_{t}^{i}),\ x_{t}^{i}-x_{t+1}^{i}\big\rangle - \|x_{t}^{i}-x_{t+1}^{i}\|^{2}\\
&= 2\eta_{t}^{i}\big\langle\nabla f_i(x_{t}^{i})-\nabla f_i(x_{t-1}^{i}),\ x_{t}^{i}-x_{t+1}^{i}\big\rangle - \|x_{t}^{i}-x_{t+1}^{i}\|^{2} + 2\eta_{t}^{i}\big\langle\nabla f_i(x_{t-1}^{i}),\ x_{t}^{i}-x_{t+1}^{i}\big\rangle. \qquad (35)
\end{aligned}$$
For the first term in (35), using Cauchy-Schwarz and Young's inequalities, as well as $\eta_{t}^{i} \leq \frac{\|x_{t}^{i}-x_{t-1}^{i}\|}{2\|\nabla f_i(x_{t}^{i})-\nabla f_i(x_{t-1}^{i})\|}$, we have:
$$\begin{aligned}
2\eta_{t}^{i}\big\langle\nabla f_i(x_{t}^{i})-\nabla f_i(x_{t-1}^{i}),\ x_{t}^{i}-x_{t+1}^{i}\big\rangle &\leq 2\eta_{t}^{i}\big\|\nabla f_i(x_{t}^{i})-\nabla f_i(x_{t-1}^{i})\big\|\cdot\big\|x_{t}^{i}-x_{t+1}^{i}\big\|\\
&\leq \big\|x_{t}^{i}-x_{t-1}^{i}\big\|\cdot\big\|x_{t}^{i}-x_{t+1}^{i}\big\|\\
&\leq \frac{1}{2}\big\|x_{t}^{i}-x_{t-1}^{i}\big\|^{2} + \frac{1}{2}\big\|x_{t}^{i}-x_{t+1}^{i}\big\|^{2}.
\end{aligned}$$
For the third term in (35), by convexity of $f_i(x)$, we have
$$\begin{aligned}
2\eta_{t}^{i}\big\langle\nabla f_i(x_{t-1}^{i}),\ x_{t}^{i}-x_{t+1}^{i}\big\rangle &= \frac{2\eta_{t}^{i}}{\eta_{t-1}^{i}}\big\langle x_{t-1}^{i}-x_{t}^{i},\ x_{t}^{i}-x_{t+1}^{i}\big\rangle\\
&= 2\theta_{t}^{i}\eta_{t}^{i}\big\langle x_{t-1}^{i}-x_{t}^{i},\ \nabla f_i(x_{t}^{i})\big\rangle\\
&\leq 2\theta_{t}^{i}\eta_{t}^{i}\big[f_i(x_{t-1}^{i})-f_i(x_{t}^{i})\big].
\end{aligned}$$
Together, we have
$$\frac{1}{m}\sum_{i=1}^{m}\|x_{t}^{i}-x_{t+1}^{i}\|^{2} \leq \frac{1}{m}\sum_{i=1}^{m}\Big\{\frac{1}{2}\|x_{t}^{i}-x_{t-1}^{i}\|^{2} - \frac{1}{2}\|x_{t+1}^{i}-x_{t}^{i}\|^{2} + 2\theta_{t}^{i}\eta_{t}^{i}\big[f_i(x_{t-1}^{i})-f_i(x_{t}^{i})\big]\Big\}. \qquad (36)$$
Now to bound the last term in (33), we have
$$-2\Big\langle x_t-x^{\star},\ \frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i})\Big\rangle = \frac{2}{m}\sum_{i=1}^{m}\eta_{t}^{i}\big\langle x^{\star}-x_{t}^{i},\ \nabla f_i(x_{t}^{i})\big\rangle \leq \frac{2}{m}\sum_{i=1}^{m}\eta_{t}^{i}\big[f_i(x^{\star})-f_i(x_{t}^{i})\big], \qquad (37)$$
where in the equality we used the fact that $x_{t}^{i} = x_t$, for $k=1$.
Putting (36) and (37) back to (33), we have
$$\begin{aligned}
\|x_{t+1}-x^{\star}\|^{2} &= \|x_t-x^{\star}\|^{2} + \Big\|\frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i})\Big\|^{2} - 2\Big\langle x_t-x^{\star},\ \frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i})\Big\rangle\\
&\leq \|x_t-x^{\star}\|^{2} + \frac{1}{m}\sum_{i=1}^{m}\Big\{\frac{1}{2}\|x_{t}^{i}-x_{t-1}^{i}\|^{2} - \frac{1}{2}\|x_{t+1}^{i}-x_{t}^{i}\|^{2} + 2\theta_{t}^{i}\eta_{t}^{i}\big[f_i(x_{t-1}^{i})-f_i(x_{t}^{i})\big]\Big\}\\
&\quad - 2\Big\langle x_t-x_{t}^{i},\ \frac{1}{m}\sum_{i=1}^{m}\eta_{t}^{i}\nabla f_i(x_{t}^{i})\Big\rangle + \frac{2}{m}\sum_{i=1}^{m}\eta_{t}^{i}\big[f_i(x^{\star})-f_i(x_{t}^{i})\big]. \qquad (38)
\end{aligned}$$
Averaging (38) for $i = 1,\dots,m$, notice that the first inner product term in (38) disappears, as $x_t = \frac{1}{m}\sum_{i=1}^{m}x_{t}^{i}$:
$$\frac{1}{m}\sum_{i=1}^{m}\Big[\|x_{t+1}-x^{\star}\|^{2} + 2\eta_{t}^{i}(1+\theta_{t}^{i})\big(f_i(x_{t}^{i})-f_i(x^{\star})\big) + \frac{1}{2}\|x_{t+1}^{i}-x_{t}^{i}\|^{2}\Big]$$
$$\leq \frac{1}{m}\sum_{i=1}^{m}\Big[\|x_{t}-x^{\star}\|^{2} + 2\eta_{t}^{i}\theta_{t}^{i}\big(f_i(x_{t-1}^{i})-f_i(x^{\star})\big) + \frac{1}{2}\|x_{t}^{i}-x_{t-1}^{i}\|^{2}\Big].$$

Theorem 5. Let $f_i : \mathbb{R}^d \to \mathbb{R}$ be convex with locally Lipschitz gradients. Then, the sequence $\{x_t\}$ generated by Algorithm 1, assuming $k=1$ and $p=1$ with full batch, satisfies the following:
$$\frac{1}{m}\sum_{i=1}^{m}\big(f_i(\tilde{x}_{t}^{i})-f_i(x^{\star})\big) \leq \frac{2D\hat{L}^{2}}{2\hat{L}t+1}, \qquad (39)$$
where
$$\tilde{x}_{t}^{i} = \frac{\eta_{t}^{i}(1+\theta_{t}^{i})x_{t}^{i} + \sum_{k=1}^{t-1}\alpha_{k}^{i}x_{k}^{i}}{S_{t}^{i}},\qquad \alpha_{k}^{i} = \eta_{k}^{i}(1+\theta_{k}^{i}) - \eta_{k+1}^{i}\theta_{k+1}^{i},\qquad S_{t}^{i} = \eta_{t}^{i}(1+\theta_{t}^{i}) + \sum_{k=1}^{t-1}\alpha_{k}^{i} = \sum_{k=1}^{t}\eta_{k}^{i} + \eta_{1}^{i}\theta_{1}^{i},$$
and $\hat{L}$ is a constant. Further, if $x^{\star}$ is any minimum of $f_i(x)$ for all $i\in[m]$, (39) implies convergence for problem (1).


Proof. We start by telescoping (31). Then, we arrive at
$$\|x_{t+1}-x^{\star}\|^{2} + \frac{1}{2m}\sum_{i=1}^{m}\|x_{t+1}^{i}-x_{t}^{i}\|^{2} + \frac{2}{m}\sum_{i=1}^{m}\Big[\eta_{t}^{i}(1+\theta_{t}^{i})\big(f_i(x_{t}^{i})-f_i(x^{\star})\big) + \sum_{k=1}^{t-1}\big(\eta_{k}^{i}(1+\theta_{k}^{i})-\eta_{k+1}^{i}\theta_{k+1}^{i}\big)\big(f_i(x_{k}^{i})-f_i(x^{\star})\big)\Big] \qquad (40)$$
$$\leq \|x_{1}-x^{\star}\|^{2} + \frac{1}{2m}\sum_{i=1}^{m}\|x_{1}^{i}-x_{0}^{i}\|^{2} + \frac{2}{m}\sum_{i=1}^{m}\eta_{1}^{i}\theta_{1}^{i}\big(f_i(x_{0}^{i})-f_i(x^{\star})\big) := D. \qquad (41)$$
Observe that (40) is nonnegative by the definition of $\eta_{t}^{i}$, implying that the iterates $\{x_{t}^{i}\}$ are bounded. Now, define the set $S := \mathrm{conv}(\{x_{0}^{i}, x_{1}^{i}, \dots\})$, which is bounded as the convex hull of bounded points. Therefore, $\nabla f_i$ is Lipschitz continuous on $S$. That is, there exists $\hat{L}_i$ such that
$$\|\nabla f_i(x) - \nabla f_i(y)\| \leq \hat{L}_i\|x-y\|, \qquad \forall x, y \in S.$$
Also note that (40) is of the form $\alpha_{t}^{i}\big[f_i(x_{t}^{i})-f_i(x^{\star})\big] + \sum_{k=1}^{t-1}\alpha_{k}^{i}\big[f_i(x_{k}^{i})-f_i(x^{\star})\big] = \sum_{k=1}^{t}\alpha_{k}^{i}\big[f_i(x_{k}^{i})-f_i(x^{\star})\big]$. The sum of the coefficients equals:
$$\sum_{k=1}^{t}\alpha_{k}^{i} = \eta_{t}^{i}(1+\theta_{t}^{i}) + \sum_{k=1}^{t-1}\big[\eta_{k}^{i}(1+\theta_{k}^{i})-\eta_{k+1}^{i}\theta_{k+1}^{i}\big] = \eta_{1}^{i}\theta_{1}^{i} + \sum_{k=1}^{t}\eta_{k}^{i} := S_{t}^{i}. \qquad (42)$$
Therefore, by Jensen’s inequality, we have
$$D \geq \|x_{1}-x^{\star}\|^{2} + \frac{1}{2m}\sum_{i=1}^{m}\|x_{1}^{i}-x_{0}^{i}\|^{2} + \frac{2}{m}\sum_{i=1}^{m}S_{t}^{i}\big(f_i(\tilde{x}_{t}^{i})-f_i(x^{\star})\big) \geq \frac{2}{m}\sum_{i=1}^{m}S_{t}^{i}\big(f_i(\tilde{x}_{t}^{i})-f_i(x^{\star})\big), \qquad (43)$$
which implies $f_i(\tilde{x}_{t}^{i}) - f_i(x^{\star}) \leq \frac{D}{2S_{t}^{i}}$, where
$$\tilde{x}_{t}^{i} = \frac{\eta_{t}^{i}(1+\theta_{t}^{i})x_{t}^{i} + \sum_{k=1}^{t-1}\alpha_{k}^{i}x_{k}^{i}}{S_{t}^{i}},\qquad \alpha_{k}^{i} = \eta_{k}^{i}(1+\theta_{k}^{i}) - \eta_{k+1}^{i}\theta_{k+1}^{i},\qquad S_{t}^{i} = \eta_{t}^{i}(1+\theta_{t}^{i}) + \sum_{k=1}^{t-1}\alpha_{k}^{i} = \sum_{k=1}^{t}\eta_{k}^{i} + \eta_{1}^{i}\theta_{1}^{i}. \qquad (44)$$
Now, notice that by definition of $\eta_{t}^{i}$, we have
$$\eta_{t}^{i} = \frac{\|x_{t}^{i}-x_{t-1}^{i}\|}{2\|\nabla f_i(x_{t}^{i})-\nabla f_i(x_{t-1}^{i})\|} \geq \frac{1}{2\hat{L}_i}, \qquad \forall i\in[m].$$
Also, notice that $\eta_{1}^{i}\theta_{1}^{i} = \frac{(\eta_{1}^{i})^{2}}{\eta_{0}^{i}} = (\eta_{1}^{i})^{2}$, assuming for simplicity that $\eta_{0}^{i} = 1$. Thus, going back to (44), we have
$$S_{t}^{i} = \sum_{k=1}^{t}\eta_{k}^{i} + \eta_{1}^{i}\theta_{1}^{i} \geq \sum_{k=1}^{t}\frac{1}{2\hat{L}_i} + \frac{1}{4\hat{L}_i^{2}} = \frac{t}{2\hat{L}_i} + \frac{1}{4\hat{L}_i^{2}} \geq \frac{t}{2\hat{L}} + \frac{1}{4\hat{L}^{2}},$$
where $\hat{L} = \max_i \hat{L}_i$. Finally, we have
$$\frac{D}{2} \geq \frac{1}{m}\sum_{i=1}^{m}S_{t}^{i}\big(f_i(\tilde{x}_{t}^{i})-f_i(x^{\star})\big) \geq \frac{1}{m}\sum_{i=1}^{m}\Big(\frac{t}{2\hat{L}}+\frac{1}{4\hat{L}^{2}}\Big)\big(f_i(\tilde{x}_{t}^{i})-f_i(x^{\star})\big),$$
which implies
$$\frac{2D\hat{L}^{2}}{2\hat{L}t+1} \geq \frac{1}{m}\sum_{i=1}^{m}\big(f_i(\tilde{x}_{t}^{i})-f_i(x^{\star})\big).$$


B ADDITIONAL EXPERIMENTS AND PLOTS

B.1 EASE OF TUNING

In this subsection, we show the effect of using different values of the parameter δ in the step size of ∆-SGD. As mentioned in Section 4, for ∆-SGD, we append δ in front of the second condition, and hence the step size becomes
$$\eta_{t,k}^{i} = \min\Bigg\{\frac{\gamma\,\|x_{t,k}^{i}-x_{t,k-1}^{i}\|}{2\|\tilde{\nabla}f_i(x_{t,k}^{i})-\tilde{\nabla}f_i(x_{t,k-1}^{i})\|},\ \sqrt{1+\delta\theta_{t,k-1}^{i}}\,\eta_{t,k-1}^{i}\Bigg\}.$$

The experimental results presented in Table 1 were obtained using a value of δ = 0.1. For γ, we remind the readers that we did not change it from the default value γ = 2 in the original implementation in https://github.com/ymalitsky/adaptive_GD/blob/master/pytorch/optimizer.py.
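For concreteness, a minimal sketch of this client step-size rule is given below; the function and variable names, the use of flattened parameter tensors, and the defaults are ours for illustration and do not correspond to the released implementation.

```python
import torch

def delta_sgd_step_size(x, x_prev, g, g_prev, eta_prev, theta_prev,
                        gamma=2.0, delta=0.1, eps=1e-12):
    """One evaluation of the delta-SGD step size for a single client.

    x, x_prev:   current and previous (flattened) local iterates
    g, g_prev:   stochastic gradients evaluated at x and x_prev
    eta_prev:    previous step size eta_{t,k-1}
    theta_prev:  previous ratio theta_{t,k-1} = eta_{t,k-1} / eta_{t,k-2}
    """
    # First condition: estimate of the (inverse) local smoothness of f_i.
    cond1 = gamma * torch.norm(x - x_prev).item() / \
            (2.0 * torch.norm(g - g_prev).item() + eps)
    # Second condition: restrict how fast the step size can grow (with delta).
    cond2 = (1.0 + delta * theta_prev) ** 0.5 * eta_prev
    eta = min(cond1, cond2)
    theta = eta / eta_prev  # fed back into the second condition next step
    return eta, theta
```

In a client loop, one would call this after each local mini-batch gradient evaluation and then update the local parameters with `x - eta * g`.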
In Figure 4, we display the final accuracy achieved when using different values of δ from the set
{0.01, 0.1, 1}. Interestingly, we observe that δ has very little impact on the final accuracy. This
suggests that ∆-SGD is remarkably robust and does not require much tuning, while consistently
achieving higher test accuracy compared to other methods in the majority of cases as shown in
Table 1.

Figure 4: Effect of using different δ in the second condition of the step size of ∆-SGD.

B.2 EFFECT OF DIFFERENT NUMBER OF LOCAL ITERATIONS

In this subsection, we show the effect of the number of local epochs, while fixing the experimental setup to be classification of the CIFAR-100 dataset, trained with a ResNet-50. We show results for all three values of the Dirichlet concentration parameter α ∈ {0.01, 0.1, 1}. The result is shown in Figure 5.
As mentioned in the main text, instead of performing a fixed number of local iterations, we perform E local epochs, similarly to (Reddi et al., 2021; Li et al., 2020). All the results in Table 1 use E = 1, and in Figure 5, we show the results for E ∈ {1, 2, 3}. Note that the considered values of E differ only slightly, but in terms of actual local gradient steps, they differ significantly.
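To make the last point concrete (a back-of-the-envelope count, not a quantity reported in the paper): with $n_i$ local samples and mini-batch size $b$, one local epoch corresponds to $\lceil n_i/b\rceil$ local gradient steps, so
$$\text{local steps per round} = E\cdot\Big\lceil \frac{n_i}{b}\Big\rceil,$$
i.e., moving from $E=1$ to $E=3$ triples the number of local gradient steps each client takes per communication round.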
As can be seen, using a higher E leads to slightly faster convergence for all values of α. However, the speed-up seems marginal compared to the amount of extra computation it requires (e.g., using E = 2 takes roughly twice the total wall-clock time of E = 1). Put differently, ∆-SGD already performs well using only a single local epoch per client (E = 1).


Figure 5: Effect of the different number of local epochs.

B.3 ADDITIONAL EXPERIMENTS USING DIFFERENT NUMBER OF LOCAL DATA PER CLIENT

Non-iidness    Optimizer    Dataset / Model
Dir(α · p)                  CIFAR-10 (ResNet-18)    CIFAR-100 (ResNet-50)
--------------------------------------------------------------------------
α = 0.1        SGD          80.7 ↓(0.0)             53.5 ↓(4.0)
               SGD (↓)      78.8 ↓(1.9)             53.6 ↓(3.9)
               SGDM         75.0 ↓(5.7)             53.9 ↓(3.6)
               SGDM (↓)     66.6 ↓(14.1)            53.1 ↓(4.4)
               Adam         79.9 ↓(0.8)             51.1 ↓(6.4)
               Adagrad      79.3 ↓(1.4)             44.5 ↓(13.0)
               SPS          64.4 ↓(16.3)            37.2 ↓(20.3)
               ∆-SGD        80.4 ↓(0.3)             57.5 ↓(0.0)

Table 3: Experimental results using different numbers of local data per client. Performance differences within 0.5% of the best result for each task are shown in bold. The subscript ↓(x.x) is the performance difference from the best result and is highlighted in pink when the difference is bigger than 2%. The symbol (↓) appended to SGD and SGDM indicates step-wise learning rate decay, where the step sizes are divided by 10 after 50%, and again by 10 after 75%, of the total rounds.

In this subsection, we provide additional experimental results where the size of the local dataset differs across clients. In particular, instead of distributing 500 samples to each client as done in Section 4 of the main text, we distribute $n_i \in [100, 500]$ samples to each client $i$, where $n_i$ is randomly sampled from the range $[100, 500]$. We keep all other settings the same.
Then, in the server aggregation step (line 13 of Algorithm 1), we compute the weighted average of
the local parameters, instead of the plain average, as suggested in (McMahan et al., 2017), and used
for instance in (Reddi et al., 2021).
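A minimal sketch of this weighted aggregation step is shown below; the function and variable names are ours, and the sketch assumes each client returns its updated parameters as a list of tensors.

```python
import torch

def weighted_average(client_params, client_num_samples):
    """Sample-size-weighted server aggregation (FedAvg-style).

    client_params:      list over clients, each a list of parameter tensors
    client_num_samples: list of local dataset sizes n_i
    """
    total = float(sum(client_num_samples))
    weights = [n / total for n in client_num_samples]
    aggregated = []
    # Aggregate layer by layer across clients.
    for layer_group in zip(*client_params):
        agg = sum(w * p for w, p in zip(weights, layer_group))
        aggregated.append(agg)
    return aggregated
```

With equal $n_i$ this reduces to the plain average used in the main text.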
We experiment in two settings: CIFAR-10 classification trained with a ResNet-18 (He et al., 2016), with Dirichlet concentration parameter α = 0.1, which was the setting where the algorithms were fine-tuned using grid search; please see details in Section 4. We also provide results for CIFAR-100 classification trained with a ResNet-50, with the same α.
The result can be found in Table 3. Similarly to the case with the same number of data per client, i.e., $n_i = 500$ for all $i \in [m]$, we see that ∆-SGD achieves good performance without any additional tuning. In particular, for CIFAR-10 classification trained with a ResNet-18, ∆-SGD achieves the second-best performance after SGD (without LR decay), by a small margin. Interestingly, SGDM (both with and without LR decay) performs much worse than SGD. This contrasts with the same setting in Table 1, where SGDM performed better than SGD, which again complicates how one should tune client optimizers in the FL setting. For CIFAR-100 classification trained with a ResNet-50, ∆-SGD outperforms all other methods by a large margin.


B.4 ADDITIONAL EXPERIMENTS USING FEDADAM

As mentioned in Section 2 of the main text, since our work focuses on client adaptivity, our method
can seamlessly be combined with server-side adaptivity like FedAdam (Reddi et al., 2021). Here, we
provide additional experiments using the FedAdam server optimizer.
A big downside of an adaptive server-side approach like FedAdam is that it adds at least one more hyperparameter to tune: the step size of the global optimizer, setting aside the two "momentum" parameters (β1 and β2) in the case of Adam. This necessity to tune additional parameters is quite orthogonal to the approach we take, where we try to remove the need for step size tuning.
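To make the server-side update concrete, below is a minimal sketch of a FedAdam-style server step in the spirit of Reddi et al. (2021): the average client update for the round is treated as a pseudo-gradient and fed through an Adam-type rule. The class name, the defaults, and the omission of bias correction are our choices for illustration, not the reference implementation.

```python
import torch

class FedAdamServer:
    """Sketch of a FedAdam-style server optimizer."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        self.lr, self.betas, self.eps = lr, betas, eps
        self.m = [torch.zeros_like(p) for p in params]  # first moment
        self.v = [torch.zeros_like(p) for p in params]  # second moment

    @torch.no_grad()
    def step(self, global_params, client_params):
        """global_params: list of tensors; client_params: list (over clients) of such lists."""
        b1, b2 = self.betas
        for j, p in enumerate(global_params):
            # Pseudo-gradient: average client update for this parameter tensor.
            delta = torch.stack([c[j] - p for c in client_params]).mean(dim=0)
            self.m[j] = b1 * self.m[j] + (1 - b1) * delta
            self.v[j] = b2 * self.v[j] + (1 - b2) * delta ** 2
            p.add_(self.lr * self.m[j] / (self.v[j].sqrt() + self.eps))
```

The client-side ∆-SGD step size is unaffected by this choice of server optimizer, which is what allows the two to be combined directly.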
Indeed, as can be seen in Reddi et al. (2021, Appendix D.2), 9 different server-side global optimizer step sizes are grid-searched, and Reddi et al. (2021, Table 8) shows that the best-performing server learning rate differs for each task.
That being said, we simply used the default step size and momentum values of Adam (from PyTorch), to showcase that our method can be seamlessly combined with FedAdam. Due to time constraints, we tested up to CIFAR-100 trained with a ResNet-18, so that all datasets are covered. The result can be found below.
For three out of four cases, ∆-SGD equipped with the Adam server-side optimizer achieves the best accuracy by a large margin. For the last case, CIFAR-100 trained with a ResNet-18, SGD with decaying step size achieves noticeably better accuracy than ∆-SGD; however, we refer the readers to Table 1, where for the same setting, ∆-SGD equipped with simple averaging achieves 61.1% accuracy, without any tuning.

Additional experiments using FedAdam (Reddi et al., 2021)

Non-iidness   Optimizer   MNIST (CNN)    FMNIST (CNN)   CIFAR-10 (ResNet-18)   CIFAR-100 (ResNet-18)
Dir(α · p)
-----------------------------------------------------------------------------------------------------
α = 0.1       SGD         97.3 ↓(0.6)    83.7 ↓(2.1)    52.0 ↓(11.8)           46.7 ↓(2.5)
              SGD (↓)     96.4 ↓(1.4)    80.9 ↓(4.9)    49.1 ↓(14.7)           49.2 ↓(0.0)
              SGDM        97.5 ↓(0.4)    84.6 ↓(1.2)    53.7 ↓(10.1)           13.3 ↓(35.9)
              SGDM (↓)    96.4 ↓(1.5)    81.8 ↓(4.0)    53.3 ↓(10.5)           16.8 ↓(32.4)
              Adam        96.4 ↓(1.5)    81.5 ↓(4.3)    27.8 ↓(36.0)           38.3 ↓(10.9)
              Adagrad     95.7 ↓(2.2)    82.1 ↓(3.7)    10.4 ↓(53.4)           1.0 ↓(48.2)
              SPS         96.6 ↓(1.3)    85.0 ↓(0.8)    21.6 ↓(42.2)           1.6 ↓(47.6)
              ∆-SGD       97.9 ↓(0.0)    85.8 ↓(0.0)    63.8 ↓(0.0)            41.9 ↓(7.3)

Table 4: Additional experiments using FedAdam (Reddi et al., 2021).

B.5 ADDITIONAL EXPERIMENTS USING MOON

Here, we provide additional experimental results using MOON (Li et al., 2021), which utilizes model-contrastive learning to handle heterogeneous data. In this sense, the goal of MOON is very similar to that of FedProx (Li et al., 2020); yet, due to its model-contrastive nature, it can incur a bigger memory overhead compared to FedProx, as one needs to keep track of the previous model(s) to compute the representation of each local batch under the local model of the last round, as well as under the global model.
As such, we only ran MOON for CIFAR-10 classification using a ResNet-18; for CIFAR-100 classification using a ResNet-50, our computing resources ran out of memory for the same configuration we ran in the main text.
The results are summarized in Table 5. As can be seen, even ignoring the fact that MOON utilizes
more memory, MOON does not help for most of the optimizers–it actually often results in worse final
accuracies than the corresponding results in Table 1 from the main text.


Non-iidness   Optimizer   CIFAR-10 (ResNet-18)
Dir(α · p)
------------------------------------------------
α = 0.1       SGD         78.2 ↓(4.9)
              SGD (↓)     74.2 ↓(8.9)
              SGDM        76.4 ↓(6.7)
              SGDM (↓)    75.5 ↓(14.1)
              Adam        82.4 ↓(0.6)
              Adagrad     81.3 ↓(1.8)
              SPS         9.57 ↓(73.5)
              ∆-SGD       83.1 ↓(0.0)

Table 5: Experimental results using the MOON loss function. Performance differences within 0.5% of the best result for each task are shown in bold. The subscript ↓(x.x) is the performance difference from the best result and is highlighted in pink when the difference is bigger than 2%. The symbol (↓) appended to SGD and SGDM indicates step-wise learning rate decay, where the step sizes are divided by 10 after 50%, and again by 10 after 75%, of the total rounds.

B.6 ADDITIONAL EXPERIMENTS USING FEDPROX

Here, we provide the complete result of Table 2b, conducting the experiment using the FedProx loss function for the most heterogeneous case (α = 0.01) for all datasets and architectures considered. The result can be found below, where ∆-SGD achieves the best accuracy in three out of five cases (FMNIST trained with a CNN, CIFAR-10 trained with a ResNet-18, and CIFAR-100 trained with a ResNet-50); in the remaining two cases (MNIST trained with a CNN and CIFAR-100 trained with a ResNet-18), ∆-SGD achieves the second-best accuracy, with a small margin to the best.

Additional experiments using FedProx loss function (Li et al., 2020)

Non-iidness   Optimizer   MNIST (CNN)    FMNIST (CNN)    CIFAR-10 (ResNet-18)   CIFAR-100 (ResNet-18)   CIFAR-100 (ResNet-50)
Dir(α · p)
------------------------------------------------------------------------------------------------------------------------------
α = 0.01      SGD         95.7 ↓(1.4)    79.0 ↓(1.6)     20.0 ↓(13.8)           30.3 ↓(0.0)             25.2 ↓(5.9)
              SGD (↓)     97.2 ↓(0.0)    79.3 ↓(1.3)     31.3 ↓(2.5)            29.5 ↓(0.8)             20.2 ↓(10.8)
              SGDM        73.8 ↓(23.4)   72.8 ↓(7.8)     29.3 ↓(4.4)            22.9 ↓(7.5)             23.8 ↓(7.2)
              SGDM (↓)    81.2 ↓(16.0)   78.0 ↓(2.6)     25.3 ↓(8.5)            19.8 ↓(10.5)            15.0 ↓(16.0)
              Adam        82.3 ↓(14.8)   65.6 ↓(15.0)    28.1 ↓(5.7)            24.8 ↓(5.6)             22.6 ↓(8.4)
              Adagrad     76.3 ↓(20.9)   51.2 ↓(29.4)    19.3 ↓(14.5)           12.9 ↓(17.4)            4.1 ↓(26.9)
              SPS         85.5 ↓(11.7)   62.1 ↓(18.45)   27.6 ↓(6.2)            21.3 ↓(9.0)             16.5 ↓(14.5)
              ∆-SGD       96.9 ↓(0.26)   80.6 ↓(0.0)     33.8 ↓(0.0)            29.7 ↓(0.6)             31.0 ↓(0.0)

Table 6: Additional experiments using FedProx loss function.

B.7 ADDITIONAL PLOT WITH STANDARD DEVIATION (3 RANDOM SEEDS)

In this section, we repeat the experiments for three independent trials (using three random seeds) for two tasks: CIFAR-10 classification trained with a ResNet-18, and FMNIST classification trained with a shallow CNN, both with Dirichlet α = 0.1. The aim is to study the robustness of ∆-SGD in comparison with other client optimizers.
We remind the readers that CIFAR-10 classification trained with a ResNet-18 with Dirichlet α = 0.1 is the setting where we performed a grid search for each client optimizer, and thus each optimizer uses the best step size that we tried. Then, for FMNIST classification, the same step sizes from the previous task are used. The result can be found below.


We see that ∆-SGD not only achieves the best average test accuracy, but also a very small standard deviation. Critically, no client optimizer other than ∆-SGD achieves good performance in both settings.

Figure 6: Additional plots for CIFAR-10 classification trained with a ResNet-18, FMNIST classification trained with a CNN, and CIFAR-100 classification trained with a ResNet-50, repeated for three independent trials. The average is plotted, with the shaded area indicating the standard deviation.

B.8 STEP SIZE PLOT FOR ∆-SGD

Here, we visualize the step size conditions for ∆-SGD to see how the proposed step size behaves in practice. We only plot the first 300 epochs, as otherwise the plot gets quite cluttered.
From Figure 7, we can see that both conditions for $\eta_{t,k}^{i}$ are indeed necessary. The first condition, plotted in green, approximates the local smoothness of $f_i$, but can get quite oscillatory. The second condition, plotted in blue, effectively restricts the first condition from taking values that are too large.


Non-iidness   Optimizer   CIFAR-10/ResNet-18           FMNIST/CNN                   CIFAR-100/ResNet-50
Dir(α · p)                Average          Std. Dev.   Average          Std. Dev.   Average           Std. Dev.
------------------------------------------------------------------------------------------------------------------
α = 0.1       SGD         73.96 ↓(9.93)    0.0263      83.43 ↓(1.78)    0.0178      49.44 ↓(14.58)    0.0376
              SGD (↓)     67.79 ↓(16.10)   0.1550      77.83 ↓(7.38)    0.0815      40.26 ↓(23.76)    0.1687
              SGDM        77.59 ↓(6.30)    0.0142      83.27 ↓(1.94)    0.0186      53.80 ↓(10.22)    0.0149
              SGDM (↓)    72.11 ↓(11.78)   0.0611      81.36 ↓(3.85)    0.0606      46.77 ↓(17.25)    0.0762
              Adam        83.24 ↓(0.65)    0.0126      79.98 ↓(5.23)    0.0228      58.89 ↓(5.13)     0.0098
              Adagrad     83.86 ↓(0.03)    0.0048      53.06 ↓(32.15)   0.0487      49.93 ↓(14.09)    0.0177
              SPS         68.38 ↓(15.51)   0.0192      84.59 ↓(0.62)    0.0154      43.16 ↓(20.86)    0.0072
              ∆-SGD       83.89 ↓(0.00)    0.0052      85.21 ↓(0.00)    0.0152      64.02 ↓(0.00)     0.0051

Table 7: CIFAR-10 classification trained with a ResNet-18, FMNIST classification trained with a CNN, and CIFAR-100 classification trained with a ResNet-50, repeated for three independent trials. The average among the three trials and the standard deviation for each client optimizer are reported.

Figure 7: Step size conditions for ∆-SGD. In green, we plot the first condition, and in blue, we plot the second condition for $\eta_{t,k}^{i}$; the resulting step size, plotted in red, takes the minimum of the two conditions.

B.9 ILLUSTRATION OF THE DEGREE OF HETEROGENEITY

As mentioned in the main text, the level of “non-iidness” is controlled using latent Dirichlet allocation
(LDA) applied to the labels. This approach is based on (Hsu et al., 2019) and involves assigning each
client a multinomial distribution over the labels. This distribution determines the labels from which
the client’s training examples are drawn. Specifically, the local training examples for each client are
sampled from a categorical distribution over N classes, parameterized by a vector q. The vector q is
drawn from a Dirichlet distribution with a concentration parameter α times a prior distribution vector
p over N classes.
To vary the level of "non-iidness," different values of α are considered: 0.01, 0.1, and 1. A larger α corresponds to settings that are closer to i.i.d. scenarios, while smaller values introduce more diversity and non-iidness among the clients, as visualized in Figure 8. As can be seen, for α = 1, each client (each row on the y-axis) has fairly diverse classes, similarly to the i.i.d. case; as we decrease α, the number of classes that each client has access to decreases. With α = 0.01, there are many clients who only have access to a single class or a couple of classes.
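A minimal sketch of this LDA-style partitioning is given below; the function name, the sampling-without-replacement bookkeeping, and the fixed per-client sample count are our simplifications of the procedure of Hsu et al. (2019) described above.

```python
import numpy as np

def dirichlet_label_partition(labels, num_clients=100, alpha=0.1, seed=0):
    """Assign example indices to clients using q ~ Dir(alpha * p)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    prior = np.array([(labels == c).mean() for c in classes])  # prior p over classes
    # Pool of available indices per class (drawn without replacement).
    pool = {c: list(rng.permutation(np.flatnonzero(labels == c))) for c in classes}
    n_per_client = len(labels) // num_clients
    client_indices = []
    for _ in range(num_clients):
        q = rng.dirichlet(alpha * prior)           # client-specific label distribution
        counts = rng.multinomial(n_per_client, q)  # samples per class for this client
        chosen = []
        for c, k in zip(classes, counts):
            take = min(k, len(pool[c]))
            chosen.extend(pool[c][:take])
            pool[c] = pool[c][take:]
        client_indices.append(np.array(chosen))
    return client_indices
```

Smaller values of alpha concentrate each client's distribution q on fewer classes, reproducing the behavior illustrated in Figure 8.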


Figure 8: Illustration of the degree of heterogeneity induced by using different concentration parameters α for the Dirichlet distribution, for the CIFAR-10 dataset (10 colors) and 100 clients (100 rows on the y-axis).

