Offline Reinforcement Learning:
Fundamental Barriers for Value Function Approximation
Abstract
We consider the offline reinforcement learning problem, where the aim is to learn a decision-making policy from logged data. Offline RL—particularly when coupled with (value) function approximation to allow for generalization in large or continuous state spaces—is becoming increasingly relevant in practice, because it avoids costly and time-consuming online data collection and is well suited to safety-critical domains. Existing sample complexity guarantees for offline value function approximation methods typically require both (1) distributional assumptions (i.e., good coverage) and (2) representational assumptions (i.e., the ability to represent some or all value functions) stronger than what is required for supervised learning. However, the necessity of these conditions and the fundamental limits of offline RL are not well understood in spite of decades of research. This led Chen and Jiang (2019) to conjecture that concentrability (the most standard notion of coverage) and realizability (the weakest representation condition) alone are not sufficient for sample-efficient offline RL. We resolve this conjecture in the positive by proving that in general, even if both concentrability and realizability are satisfied, any algorithm requires sample complexity polynomial in the size of the state space to learn a non-trivial policy.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning, and highlight a phenomenon called overcoverage which serves as a fundamental barrier for offline value function approximation methods. A consequence of our results for reinforcement learning with linear function approximation is that the separation between online and offline RL can be arbitrarily large, even in constant dimension.
1 Introduction
In offline reinforcement learning, we aim to evaluate or optimize decision-making policies using logged transitions and rewards from historical experiments or expert demonstrations. Offline RL has great promise for decision-making applications where actively acquiring data is expensive or cumbersome (e.g., robotics (Pinto and Gupta, 2016; Levine et al., 2018; Kalashnikov et al., 2018)), or where safety is critical (e.g., autonomous driving (Sallab et al., 2017; Kendall et al., 2019) and healthcare (Gottesman et al., 2018, 2019; Wang et al., 2018; Yu et al., 2019; Nie et al., 2021)). In particular, there is substantial interest in combining offline reinforcement learning with function approximation (e.g., deep neural networks) in order to encode inductive biases and enable generalization across large, potentially continuous state spaces, with recent progress on both model-free and model-based approaches (Ross and Bagnell, 2012; Laroche et al., 2019; Fujimoto et al., 2019; Kumar et al., 2019; Agarwal et al., 2020). However, existing algorithms are extremely data-intensive, and offline RL methods—to date—have seen limited deployment in the aforementioned applications. To enable practical deployment going forward, it is paramount that we develop a strong understanding of the statistical foundations for reliable, sample-efficient offline reinforcement learning with function approximation, as well as an understanding of when and why existing methods succeed and how to effectively collect data.
Compared to the basic supervised learning problem, offline reinforcement learning with function approximation poses substantial algorithmic challenges due to two issues: distribution shift and credit assignment. Within the literature on value function approximation (or, approximate dynamic programming), all existing methods require both (1) distributional conditions, which assert that the logged data has good coverage (addressing distribution shift), and (2) representational conditions, which assert that the function approximator is flexible enough to represent value functions induced by certain policies (addressing credit assignment). Notably, sample complexity analyses for standard offline RL methods (e.g., fitted Q-iteration) require representation conditions considerably more restrictive than what is required for classical supervised learning (Munos, 2003, 2007; Munos and Szepesvári, 2008; Antos et al., 2008), and these methods are known to diverge when these conditions do not hold (Gordon, 1995; Tsitsiklis and Van Roy, 1996, 1997; Wang et al., 2021a). Despite substantial research effort, it is not known whether these conditions constitute fundamental limits, or whether the algorithms can be improved. Resolving this issue would serve as a stepping stone toward developing a theory for offline reinforcement learning that parallels our understanding of supervised (statistical) learning.
The lack of understanding of fundamental limits in offline reinforcement learning was highlighted by Chen and Jiang (2019), who observed that all existing finite-sample analyses for offline RL algorithms based on concentrability (Munos, 2003)—the most ubiquitous notion of data coverage—require representation conditions significantly stronger than realizability, a standard condition from supervised learning which asserts that the function approximator can represent optimal value functions. Chen and Jiang (2019) conjectured that realizability and concentrability alone do not suffice for sample-efficient offline RL, and noted that proving such a result seemed to be out of reach for existing lower bound techniques. Subsequent progress led to positive results for sample-efficient offline RL under coverage conditions stronger than concentrability (Xie and Jiang, 2021) and impossibility results under weaker coverage conditions (Wang et al., 2020; Zanette, 2021), but the original conjecture remained open.
Contributions
We resolve the conjecture of Chen and Jiang (2019) in the positive. We provide an information-theoretic lower bound which shows that in general, even if both concentrability and realizability are satisfied, any algorithm requires sample complexity polynomial in the size of the state space to learn a near-optimal policy. This establishes that sample-efficient offline reinforcement learning in large state spaces is not possible unless more stringent conditions—either distributional or representational—hold.
Our lower bound construction is qualitatively different from previous approaches and holds even when the effective horizon and number of actions are constant and the value function class has constant size. The lower bound highlights the role of a phenomenon we call overcoverage (first documented by Xie and Jiang (2021)), wherein the data collection distribution is supported over spurious states not reachable by any policy which—despite being irrelevant for learning in the online setting—create significant uncertainty. Our work shows that the overcoverage phenomenon is a fundamental, information-theoretic barrier for the design of offline reinforcement learning algorithms.
1.1 Offline Reinforcement Learning Setting
Markov decision processes
We consider the infinite-horizon discounted reinforcement learning setting. Formally, a Markov decision process $M = (\mathcal{S}, \mathcal{A}, P, R, \gamma, d_0)$ consists of a (potentially large/continuous) state space $\mathcal{S}$, action space $\mathcal{A}$, probability transition function $P : \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$, reward function $R : \mathcal{S} \times \mathcal{A} \to [0, 1]$, discount factor $\gamma \in (0, 1)$, and initial state distribution $d_0 \in \Delta(\mathcal{S})$. Each (randomized) policy $\pi : \mathcal{S} \to \Delta(\mathcal{A})$ induces a distribution over trajectories $(s_0, a_0, r_0), (s_1, a_1, r_1), \ldots$ via the following process. First, $s_0 \sim d_0$; then for each $t \ge 0$, $a_t \sim \pi(\cdot \mid s_t)$, $r_t = R(s_t, a_t)$, and $s_{t+1} \sim P(\cdot \mid s_t, a_t)$. We let $\mathbb{E}^{\pi}$ and $\mathbb{P}^{\pi}$ denote expectation and probability under this process, respectively.
The expected return for policy $\pi$ is defined as $J(\pi) := \mathbb{E}^{\pi}\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$, and the value function $V^{\pi}$ and $Q$-function $Q^{\pi}$ for $\pi$ are given by $V^{\pi}(s) := \mathbb{E}^{\pi}\big[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s\big]$ and $Q^{\pi}(s, a) := \mathbb{E}^{\pi}\big[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, a_0 = a\big]$.
It is well-known that there exists a deterministic policy $\pi^{\star}$ that maximizes $Q^{\pi}(s, a)$ for all $(s, a)$ simultaneously, and thus also maximizes $J(\pi)$. Letting $Q^{\star} := Q^{\pi^{\star}}$ and $V^{\star} := V^{\pi^{\star}}$, we have $\pi^{\star}(s) = \arg\max_{a \in \mathcal{A}} Q^{\star}(s, a)$ for all $s$. Finally, we define the occupancy measure for policy $\pi$ via $d^{\pi}(s, a) := (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t \, \mathbb{P}^{\pi}(s_t = s, a_t = a)$. We drop the dependence on the model when it is clear from context.
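To make the definitions above concrete, here is a minimal sketch in Python of evaluating $J(\pi)$ exactly in a tabular MDP by solving the Bellman evaluation equation $V^{\pi} = (I - \gamma P_{\pi})^{-1} r_{\pi}$. The two-state MDP below is invented purely for illustration and is unrelated to the constructions in this paper.

```python
import numpy as np

# Toy tabular MDP: 2 states, 2 actions (all numbers are illustrative).
n_states, n_actions, gamma = 2, 2, 0.9
P = np.array([  # P[s, a] is the next-state distribution
    [[0.8, 0.2], [0.1, 0.9]],
    [[0.5, 0.5], [0.3, 0.7]],
])
R = np.array([[1.0, 0.0], [0.0, 2.0]])  # R[s, a]
d0 = np.array([1.0, 0.0])               # initial state distribution

def policy_value(pi):
    """Exact J(pi) for a stochastic policy pi[s, a], via V = (I - gamma*P_pi)^{-1} r_pi."""
    P_pi = np.einsum("sa,san->sn", pi, P)   # state-to-state transitions under pi
    r_pi = np.einsum("sa,sa->s", pi, R)     # expected one-step reward under pi
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    return d0 @ V

uniform = np.full((n_states, n_actions), 0.5)
greedy = np.array([[1.0, 0.0], [0.0, 1.0]])  # action 0 in state 0, action 1 in state 1
print(policy_value(uniform), policy_value(greedy))
```

Since rewards lie in $[0, 2]$ here, every return is bounded by $2/(1-\gamma)$; the greedy policy collects strictly higher one-step rewards in both states and thus a higher return.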
Offline policy learning
In the offline policy learning (or, optimization) problem, we do not have direct access to the underlying MDP and instead receive a dataset $D = \{(s, a, r, s')\}$ of $n$ tuples with $(s, a) \sim \mu$, $r = R(s, a)$, and $s' \sim P(\cdot \mid s, a)$ i.i.d., where $\mu$ is the data collection distribution. The goal of the learner is to use the dataset $D$ to learn an $\varepsilon$-optimal policy $\hat{\pi}$, that is:
\[
\mathbb{E}\big[J(\pi^{\star}) - J(\hat{\pi})\big] \le \varepsilon,
\]
where the expectation is over the draw of $D$ and any randomness used by the algorithm.
In order to provide sampleefficient learning guarantees that do not depend on the size of the state space, value function approximation methods take advantage of the following conditions.

Realizability. This condition asserts that we have access to a class of candidate value functions $\mathcal{Q} \subseteq (\mathcal{S} \times \mathcal{A} \to \mathbb{R})$ (e.g., linear models or neural networks) such that $Q^{\star} \in \mathcal{Q}$. Realizability (that is, a well-specified model) is the most common representation condition in supervised learning and statistical estimation (e.g., Bousquet et al. (2003); Wainwright (2019)) and is also widely used in contextual bandits (Agarwal et al., 2012; Foster et al., 2018).

Concentrability. Call a distribution $\nu \in \Delta(\mathcal{S} \times \mathcal{A})$ admissible for the MDP if there exists a (potentially stochastic and non-stationary) policy $\pi$ and index $t$ such that $\nu(s, a) = \mathbb{P}^{\pi}(s_t = s, a_t = a)$. This condition asserts that there exists a constant $C$ such that for all admissible $\nu$,
\[
\sup_{(s, a)} \frac{\nu(s, a)}{\mu(s, a)} \le C. \tag{1}
\]
Concentrability is a simple but fairly strong notion of coverage which demands that the data distribution uniformly covers all states.
Under these conditions, an offline RL algorithm is said to be sample-efficient if it learns an $\varepsilon$-optimal policy with $\mathrm{poly}(C, \log|\mathcal{Q}|, \varepsilon^{-1}, (1-\gamma)^{-1})$ samples. Notably, such a guarantee depends only on the complexity $\log|\mathcal{Q}|$ of the value function class, not on the size of the state space.[1] [1] For infinite function classes ($|\mathcal{Q}| = \infty$), one can replace $\log|\mathcal{Q}|$ with other standard measures of statistical capacity, such as Rademacher complexity or metric entropy. For example, when $\mathcal{Q}$ is a class of $d$-dimensional linear functions, $\log|\mathcal{Q}|$ can be replaced by the dimension $d$, which is an upper bound on the metric entropy.
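To make the concentrability coefficient in Eq. (1) concrete, the following sketch enumerates deterministic stationary policies in a tiny hypothetical MDP and computes the worst-case density ratio over the first several timesteps. Everything here (transitions, $d_0$, $\mu$) is invented for the example, and we restrict to stationary policies for brevity even though the definition also allows non-stationary ones.

```python
import itertools
import numpy as np

# Hypothetical 2-state, 2-action MDP, unrelated to the paper's construction.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]])  # P[s, a] = next-state distribution
d0 = np.array([1.0, 0.0])                 # initial state distribution
mu = np.array([[0.3, 0.2], [0.2, 0.3]])   # data distribution over (s, a)

def concentrability(horizon=20):
    """Max over deterministic stationary policies and steps t < horizon of
    nu_t(s, a) / mu(s, a), where nu_t is the step-t state-action distribution."""
    C = 0.0
    for actions in itertools.product(range(2), repeat=2):  # pi(s) = actions[s]
        d = d0.copy()
        for _ in range(horizon):
            nu = np.zeros((2, 2))
            for s in range(2):
                nu[s, actions[s]] = d[s]  # state-action occupancy at this step
            C = max(C, float(np.max(nu / mu)))
            # push the state distribution forward one step under pi
            d = np.array([sum(d[s] * P[s, actions[s], s2] for s in range(2))
                          for s2 in range(2)])
    return C

print(concentrability())  # dominated by t = 0, where nu is a point mass on s0
```

Because $d_0$ is a point mass, the worst ratio is attained at $t = 0$ by the policy that plays the action with the least $\mu$-mass at $s_0$.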
Are realizability and concentrability sufficient?
While realizability and concentrability are appealing in their simplicity, these assumptions alone are not known to suffice for sample-efficient offline RL. The most well-known line of research (Munos, 2003, 2007; Munos and Szepesvári, 2008; Antos et al., 2008; Chen and Jiang, 2019) analyzes offline RL methods such as fitted Q-iteration under the stronger representation condition that $\mathcal{Q}$ is closed under Bellman updates ("completeness"),[2] and obtains $\mathrm{poly}(C, \log|\mathcal{Q}|, \varepsilon^{-1}, (1-\gamma)^{-1})$ sample complexity. Completeness is a widely used assumption, but is substantially more restrictive than realizability and can be violated by adding a single function to $\mathcal{Q}$. Subsequent years have seen extensive research into algorithmic improvements and alternative representation and coverage conditions, but the question of whether realizability and concentrability alone are sufficient remained open. [2] Precisely, $\mathcal{T} f \in \mathcal{Q}$ for all $f \in \mathcal{Q}$, where $\mathcal{T}$ is the Bellman operator defined by $[\mathcal{T} f](s, a) = R(s, a) + \gamma \, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\big[\max_{a'} f(s', a')\big]$.
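The fragility of completeness can be seen directly in code. The toy sketch below (all numbers invented for the example) implements the Bellman operator $\mathcal{T}$ on a tabular MDP and checks that a two-element function class need not be closed under $\mathcal{T}$: applying the operator to a member of the class can exit the class.

```python
import numpy as np

# Tiny deterministic 2-state, 2-action MDP; numbers are illustrative only.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[1.0, 0.0], [0.0, 1.0]]])  # P[s, a] = next-state distribution
R = np.array([[0.0, 1.0], [1.0, 0.0]])   # R[s, a]
gamma = 0.5

def bellman(f):
    """[Tf](s, a) = R(s, a) + gamma * E_{s'}[max_{a'} f(s', a')]."""
    return R + gamma * np.einsum("san,n->sa", P, f.max(axis=1))

f1 = np.zeros((2, 2))
f2 = np.ones((2, 2))
Q_class = [f1, f2]

# T f1 = R, which equals neither f1 nor f2: {f1, f2} is not closed under T.
Tf1 = bellman(f1)
print(any(np.allclose(Tf1, g) for g in Q_class))  # False
```

Adding or removing a single function from the class can flip whether completeness holds, which is exactly why it is a much more delicate condition than realizability.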
1.2 Main Result
Our main result is an information-theoretic lower bound which shows that realizability and concentrability are not sufficient for sample-efficient offline RL.
Theorem 1 (Main theorem).
For any discount factor $\gamma$ and any sufficiently large $S \in \mathbb{N}$, there exists a family of MDPs $\mathcal{M}$ with $|\mathcal{S}| \le S$ and $|\mathcal{A}| = 2$, a value function class $\mathcal{Q}$ with $|\mathcal{Q}| = 2$, and a data distribution $\mu$ such that:

We have $Q^{\star}_M \in \mathcal{Q}$ (realizability), and concentrability holds with an absolute constant $C$ (concentrability), for all models $M \in \mathcal{M}$.

Any algorithm using fewer than $c_1 \cdot \mathrm{poly}(S)$ samples must have $\mathbb{E}\big[J(\pi^{\star}) - J(\hat{\pi})\big] \ge c_2$ for some instance in $\mathcal{M}$, where $c_1, c_2 > 0$ are absolute numerical constants and $\mathrm{poly}(S)$ denotes a fixed polynomial in $S$.
Theorem 1 shows that even though realizability and concentrability are satisfied, any algorithm requires a number of samples polynomial in $S$ to learn a near-optimal policy. Since $S$ can be arbitrarily large, this shows that sample-efficient offline RL in large state spaces is impossible without stronger representation or coverage conditions.
We next state a variant of Theorem 1 which goes further and shows that even if one replaces realizability ($Q^{\star} \in \mathcal{Q}$) by a substantially stronger representation condition—all-policy realizability—which requires that $Q^{\pi} \in \mathcal{Q}$ for every policy $\pi$ rather than just for $\pi^{\star}$, sample-efficient offline RL is still impossible. This result is established using the same family of MDPs and data distribution as in Theorem 1. The only tradeoff is that Theorem 2 requires a slightly larger value function class $\mathcal{Q}$.
Theorem 2 (Variant of Theorem 1).
Let $\gamma$ and $S$ be given. For the family of MDPs $\mathcal{M}$ and the data distribution $\mu$ described in Theorem 1, there exists a three-dimensional linear function class
\[
\mathcal{Q} = \big\{(s, a) \mapsto \langle \phi(s, a), \theta \rangle : \theta \in \Theta\big\},
\]
where $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^3$ is a known feature map with bounded norm, such that:

We have $Q^{\pi}_M \in \mathcal{Q}$ for all policies $\pi$ (all-policy realizability), and concentrability holds with an absolute constant $C$ (concentrability), for all models in $\mathcal{M}$.

Any algorithm using fewer than $c_1 \cdot \mathrm{poly}(S)$ samples must have $\mathbb{E}\big[J(\pi^{\star}) - J(\hat{\pi})\big] \ge c_2$ for some instance in $\mathcal{M}$, where $c_1, c_2 > 0$ are absolute numerical constants.
We now discuss some consequences and features of our results.

Our lower bound critically relies on the notion of overcoverage, where the data distribution contains states not visited by any admissible policy.[3] The issue of overcoverage was first noted by Xie and Jiang (2021), who observed that it can lead to pathological behavior in certain offline RL algorithms. Our result shows—somewhat surprisingly—that this phenomenon is a fundamental barrier which applies to any value function approximation method. In particular, we show that overcoverage causes spurious correlations across reachable and unreachable states, which leads to significant uncertainty in the dynamics when the number of states is large. [3] Note that while these states may not be reachable for a given MDP in the family $\mathcal{M}$, all states are reachable for some MDP in the family.
It remains to be seen whether the overcoverage phenomenon represents a serious barrier in practice, or whether it is simply an indication that more care needs to be taken in mathematically formulating the offline RL problem. On one hand, given a simulator, it is certainly possible to sample stateaction pairs that are not reachable by any policy. Furthermore, for the family of instances in our construction, any data distribution with nontrivial concentrability must satisfy overcoverage unless the learner has prior knowledge of the specific instance under consideration. On the other hand, in practice one often has additional knowledge about the underlying MDP in the form of policyinduced data, and it is not clear whether our lower bounds can be made to hold in this setting.

On the technical side, compared to recent lower bounds which establish hardness of offline RL under coverage conditions weaker than concentrability (Wang et al., 2020; Zanette, 2021), our lower bounds have a less geometric, more informationtheoretic flavor, and share more in common with lower bounds for sparsity and support testing in statistical estimation (Paninski, 2008; Verzelen and Villers, 2010; Verzelen and Gassiat, 2018; Canonne, 2020). Whereas previous lower bounds show that weak coverage conditions can lead to undercoverage and consequently failure of concentrability, our lower bound shows that overcoverage is an issue even when concentrability is satisfied.
Another interesting feature is that while previous lower bounds (Wang et al., 2020; Zanette, 2021) are based on deterministic MDPs, our construction critically uses stochastic dynamics. This departure is necessary, as the Bellman error minimization algorithm of Chen and Jiang (2019) succeeds under concentrability and realizability if the dynamics are deterministic.[4] Compared to Wang et al. (2020), who work with a relatively small state space but large horizon and feature dimension, we keep the horizon constant and instead grow the state space, leading to polynomial dependence on $S$ in the lower bound. [4] Deterministic dynamics allow one to avoid the well-known double sampling problem, and in particular cause the conditional variance in Eq. (3) of Chen and Jiang (2019) to vanish.

The lower bound construction has a constant suboptimality gap, which rules out gap-dependent regret bounds as a path toward sample-efficient offline reinforcement learning.
While our lower bound focuses on policy optimization and infinite-horizon RL for concreteness, it readily extends to the finite-horizon setting, and provides the first impossibility result for offline RL with constant horizon that we are aware of. The lower bound also extends to policy evaluation; see Section 2.4 for discussion.
1.3 Related Work
We close this section with a detailed discussion of some of the most relevant related work.
Lower bounds
While algorithm-specific counterexamples for offline reinforcement learning algorithms have a long history (Gordon, 1995; Tsitsiklis and Van Roy, 1996, 1997; Wang et al., 2021a), information-theoretic lower bounds are a more recent subject of investigation. Wang et al. (2020) (see also Amortila et al. (2020)) consider the setting where $Q^{\star}$ is linear (i.e., $Q^{\star}(s, a) = \langle \phi(s, a), \theta^{\star} \rangle$, where $\phi$ is a known feature map). They consider a weaker coverage condition tailored to the linear setting, which asserts that the feature second-moment matrix under the data distribution, $\mathbb{E}_{(s, a) \sim \mu}\big[\phi(s, a) \phi(s, a)^{\top}\big]$, is well-conditioned.
In more detail, Wang et al. (2020) construct a family of MDPs and a data distribution that satisfy realizability and the feature coverage condition above, but not concentrability, and use this family to prove an exponential sample complexity lower bound. However, for the family of MDPs under consideration, one can obtain polynomial sample complexity for any data distribution satisfying concentrability, simply because all MDPs in the family have deterministic dynamics. Hence, this construction cannot be used to establish impossibility of sample-efficient learning under concentrability and realizability. Instead, the conceptual takeaway is that the feature coverage condition can lead to undercoverage, and may not be the right assumption for offline RL. This point is further highlighted by Amortila et al. (2020), who show that in the infinite-horizon setting, the feature coverage condition can lead to non-identifiability in MDPs with only two states, meaning one cannot learn an optimal policy even with infinitely many samples. Concentrability places stronger restrictions on the data distribution and underlying dynamics, and always implies identifiability when the state and action spaces are finite.
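To make the feature coverage condition concrete, here is a minimal sketch that computes the smallest eigenvalue of the feature second-moment matrix under a data distribution. The features and the distribution are random/uniform placeholders invented for the example, not anything from the works discussed above.

```python
import numpy as np

# Illustrative check of feature coverage in the linear setting: lambda_min of
# E_mu[phi phi^T]. Features and mu below are made up for the example.
rng = np.random.default_rng(0)
d, num_pairs = 3, 50
phi = rng.normal(size=(num_pairs, d))     # phi[(s, a) index] in R^d
mu = np.full(num_pairs, 1.0 / num_pairs)  # data distribution over (s, a) pairs

Sigma = (phi * mu[:, None]).T @ phi       # second-moment matrix E_mu[phi phi^T]
lam_min = np.linalg.eigvalsh(Sigma)[0]    # eigvalsh returns ascending eigenvalues
print(lam_min > 0)  # positive lambda_min: mu covers every feature direction
```

A strictly positive $\lambda_{\min}$ means the data excites every direction of the feature space, which is exactly what this style of coverage condition asks for; the lower bounds above show that this alone can still leave the dynamics badly under-covered.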
Further work in this direction includes (1) Zanette (2021), who provides a slightly more general lower bound for linear realizability, and (2) lower bounds for online reinforcement learning with linear realizability (Du et al., 2020; Weisz et al., 2021; Wang et al., 2021b). It is worth noting that Zanette (2021) provides a lower bound with a policy-induced data distribution, where overcoverage cannot occur; however, concentrability is not satisfied by his construction.
Upper bounds
Classical analyses for offline reinforcement learning algorithms such as FQI (Munos, 2003, 2007; Munos and Szepesvári, 2008; Antos et al., 2008) provide sample complexity upper bounds in terms of concentrability under the strong representation condition of Bellman completeness. The path-breaking recent work of Xie and Jiang (2021) provides an algorithm which requires only realizability, but uses a stronger coverage condition ("pushforward concentrability") which requires that $P(s' \mid s, a) / \mu(s') \le C$ for all $(s, a, s')$. Our results imply that this condition cannot be substantially relaxed.
A complementary line of work (Uehara et al., 2020; Xie and Jiang, 2020; Jiang and Huang, 2020; Uehara et al., 2021) provides upper bounds that require only concentrability and realizability, but assume access to an additional weight function class that is flexible enough to represent various occupancy measures for the underlying MDP. These results scale with the complexity of the weight function class. In general, the complexity of this class may be prohibitively large without prior knowledge; this is witnessed by our lower bound construction.
1.4 Preliminaries
For an integer $n$, we let $[n]$ denote the set $\{1, \ldots, n\}$. For a finite set $\mathcal{X}$, $\mathrm{Unif}(\mathcal{X})$ denotes the uniform distribution over $\mathcal{X}$, and $\Delta(\mathcal{X})$ denotes the set of all probability distributions over $\mathcal{X}$. For probability distributions $\mathbb{P}$ and $\mathbb{Q}$ over a measurable space with a common dominating measure $\lambda$, we define the total variation distance as $D_{\mathrm{TV}}(\mathbb{P}, \mathbb{Q}) = \frac{1}{2} \int \big| \frac{d\mathbb{P}}{d\lambda} - \frac{d\mathbb{Q}}{d\lambda} \big| \, d\lambda$ and define the $\chi^2$-divergence as $D_{\chi^2}(\mathbb{P} \,\|\, \mathbb{Q}) = \int \frac{\left(d\mathbb{P}/d\lambda - d\mathbb{Q}/d\lambda\right)^2}{d\mathbb{Q}/d\lambda} \, d\lambda$.
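In the finite-support case, the total variation distance and $\chi^2$-divergence reduce to sums; a small self-contained sketch (standard textbook definitions, not specific to this paper):

```python
import numpy as np

# Discrete total variation distance and chi-squared divergence.
def tv(p, q):
    """D_TV(p, q) = (1/2) * sum_x |p(x) - q(x)|."""
    return 0.5 * np.abs(p - q).sum()

def chi2(p, q):
    """D_chi2(p || q) = sum_x (p(x) - q(x))^2 / q(x); requires q > 0 on the support."""
    return ((p - q) ** 2 / q).sum()

p = np.array([0.5, 0.3, 0.2])
q = np.array([1 / 3, 1 / 3, 1 / 3])
print(tv(p, q), chi2(p, q))
```

A useful relation (used later in the proof sketch) is $D_{\mathrm{TV}}(\mathbb{P}, \mathbb{Q}) \le \frac{1}{2}\sqrt{D_{\chi^2}(\mathbb{P} \,\|\, \mathbb{Q})}$, which follows from Cauchy–Schwarz.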
2 Fundamental Barriers for Offline Reinforcement Learning
In this section we present our lower bound construction, prove Theorem 1, and discuss consequences and extensions, including Theorem 2.
2.1 Construction: MDP Family, Value Functions, and Data Distribution
We first provide our lower bound construction, which entails specifying the MDP family $\mathcal{M}$, the value function class $\mathcal{Q}$, and the data distribution $\mu$; recall that $S$ and $\gamma$ are parameters for the construction.
All MDPs in $\mathcal{M}$ belong to a parameterized MDP family with shared transition and reward structure. In what follows, we first describe the structure of the parameterized family (Section 2.1.1) and give intuition for why this structure leads to statistical hardness (Section 2.1.2). We then provide a specific collection of parameters that gives rise to the hard family $\mathcal{M}$ (Section 2.1.3), and finally complete the construction by specifying the value function class and data distribution (Section 2.1.4).
2.1.1 MDP Parameterization
Let the discount factor $\gamma$ be fixed, and let $S$ be given. Assume without loss of generality that $S$ is sufficiently large and that the subset sizes appearing below are integers. We consider the parameterized MDP family illustrated in Fig. 1. Each MDP takes the form $M = (\mathcal{S}, \mathcal{A}, P, R, \gamma, d_0)$, and is parametrized by a subset of states $I \subseteq \mathcal{S}_1$ (defined below), two probability parameters $p, q \in [0, 1]$, and a reward vector $v$. All MDPs in the family share the same state space $\mathcal{S}$, action space $\mathcal{A}$, discount factor $\gamma$, and initial state distribution $d_0$, and differ only in terms of the transition function $P$ and the reward function $R$.
State space
We consider a layered state space $\mathcal{S} = \{s_0\} \cup \mathcal{S}_1 \cup \mathcal{S}_2$ with $|\mathcal{S}| \le S$, where $\{s_0\}$ contains a single initial state, which occurs at $t = 0$; $\mathcal{S}_1$ is a collection of first-layer states, which occur at $t = 1$; and $\mathcal{S}_2$ contains four self-looping terminal states, which appear for all $t \ge 2$.
Action space
Our action space is given by $\mathcal{A} = \{1, 2\}$. For states in $\mathcal{S}_1$, the two actions have distinct effects, while for states in $\{s_0\} \cup \mathcal{S}_2$ both actions have identical effects. As a result, the value of a given policy only depends on the actions it selects at layer $1$. For the sake of compactness, we use a placeholder symbol to denote either action when taken in $\{s_0\} \cup \mathcal{S}_2$, since the choice is immaterial.[5] [5] It is conceptually simpler to consider a construction where only a single action is available in $\{s_0\}$ and $\mathcal{S}_2$, but this is notationally more cumbersome.
Transition operator
For an MDP $M$, we let $I \subseteq \mathcal{S}_1$ parameterize a subset of the first-layer states. We call each state in $I$ a planted state and each state in $\mathcal{S}_1 \setminus I$ an unplanted state. The dynamics for $M$ are determined by $I$ and the parameters $p$ and $q$ as follows (cf. Fig. 1):

Layer 0 to 1 ($t = 0 \to 1$). We define $P(s \mid s_0, a) = 1/|I|$ for each $s \in I$ and either action $a$. That is, from the initial state (and taking either action), the MDP transitions to each planted state with equal probability; unplanted states are not reachable.

Layer 2 onwards ($t \ge 2$). All states in $\mathcal{S}_2$ self-loop indefinitely. That is, $P(s \mid s, a) = 1$ for all $s \in \mathcal{S}_2$ and all actions $a$.
Reward function
States in layer 0 and layer 1 carry no reward, i.e., $R(s, a) = 0$ for all $s \in \{s_0\} \cup \mathcal{S}_1$. Starting from layer 2 (i.e., $t \ge 2$), each of the four self-looping terminal states has a fixed reward determined by the corresponding entry of the parameter $v$ (recall that either action may be taken in these states).
Initial state distribution
All MDPs in the family start at $s_0$ deterministically (that is, the initial state distribution $d_0$ puts all probability mass on $s_0$). Note that since $d_0$ does not vary between instances, it may be thought of as known to the learning algorithm.
2.1.2 Intuition Behind the Construction
The family of MDPs that witnesses our lower bound is a subset of the parameterized collection just described. Before specifying the family precisely, we give intuition as to why this MDP structure leads to statistical hardness for offline reinforcement learning. The hardness arises as a result of two general principles: planted subset structure and overcoverage.
Planted Subset Structure
We use the planted subset structure specified by $I$ as a mechanism to make the state space size $S$ appear in the sample complexity lower bound. For exposition, Fig. 1 highlights only the parts of the MDP that are relevant to understanding the role of the planted subset structure.
Recall that for the first-layer states $\mathcal{S}_1$, the planted subset $I$ has the property that all states in $I$ (planted states) share the same transition rule, and all states in $\mathcal{S}_1 \setminus I$ (unplanted states) share a different transition rule. The choice of the planted subset is combinatorial in nature (for example, the number of planted subsets of size $|\mathcal{S}_1|/2$ is $\binom{|\mathcal{S}_1|}{|\mathcal{S}_1|/2}$, which is exponential in $|\mathcal{S}_1|$), which is the basis for the statistical hardness of our construction. Consider Fig. 1, which illustrates transitions from $\mathcal{S}_1$ under action 1 in red. Each planted state in $I$ (grey) transitions to the two relevant terminal states with probabilities $p$ and $1 - p$, respectively, while each unplanted state in $\mathcal{S}_1 \setminus I$ (striped) transitions to them with probabilities $q$ and $1 - q$, respectively. Suppose we are given a batch dataset of independent examples in which a state in $\mathcal{S}_1$ is selected uniformly at random, and we observe a sample from the next-state distribution when action 1 is taken; we focus on action 1 because action 2 reveals no information about the underlying MDP. One can show that basic statistical inference tasks, such as estimating the size of the planted subset, require sample complexity polynomial in $S$, as this entails detecting the subset based on data generated from a mixture of planted and unplanted states. For example, it is well known that testing whether a distribution is uniform on an unknown subset containing half of the elements of $[S]$ versus uniform on all of $[S]$ requires $\Omega(\sqrt{S})$ samples (see, e.g., Paninski, 2008; Ingster and Suslina, 2012; Canonne, 2020, Section 5.1).
Building on this hardness, one can also show that any algorithm requires a number of samples polynomial in $S$ to reliably estimate the transition probability $p$ if $I$ and $q$ are unknown. Intuitively, this arises because the only way to escape the polynomial-in-$S$ sample complexity of estimating $I$ as a means to estimate $p$ is to directly look at the marginal distribution over next states. However, the marginal distribution is uninformative for estimating $p$ when there is uncertainty about $I$ and $q$. For example, if the planted states make up a $\rho$-fraction of $\mathcal{S}_1$, the marginal probability of transitioning to a given terminal state is $\rho \, p + (1 - \rho) \, q$, from which $p$ cannot be directly recovered if $q$ is unknown.
The takeaway is that while estimating $p$ would be trivial if the dataset only consisted of transitions generated from planted states, estimating this parameter when states are drawn uniformly from $\mathcal{S}_1$ is very difficult, because an unknown subset of the data comes from unplanted states. This is relevant because—as we will show—one can choose a collection of values for the parameters such that any near-optimal policy learning algorithm can be used to recover the underlying parameter value.
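A minimal numerical sketch of this identifiability issue (the fractions and probabilities below are illustrative, not the construction's actual parameter choices): two different values of $p$, paired with different values of $q$, produce identical marginals, so the marginal alone cannot pin down $p$.

```python
# Why the marginal next-state distribution is uninformative: the learner only
# sees the mixture rho*p + (1 - rho)*q when layer-1 states are drawn uniformly.
def marginal(rho, p, q):
    """Probability of reaching a fixed terminal state when a rho-fraction of
    layer-1 states are planted (transition prob p) and the rest unplanted (q)."""
    return rho * p + (1 - rho) * q

m1 = marginal(rho=0.5, p=0.8, q=0.2)  # one hypothesis about (p, q)
m2 = marginal(rho=0.5, p=0.6, q=0.4)  # a different p, yet the same marginal
print(m1, m2)  # both approximately 0.5
```

Distinguishing the two hypotheses therefore requires identifying which individual states are planted, and that is where the polynomial-in-$S$ cost enters.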
Another important feature of the planted subset structure is that different choices of $I$ lead to the same $Q^{\star}$ function, as long as we ensure that all states in $\mathcal{S}_1$ have the same value. This allows us to construct a large number of MDPs (exponential in $S$) for which the $Q^{\star}$ function is one of only two candidates. However, it remains to show that the hardness described above can be embedded in the offline RL setting, since (i) we must ensure concentrability is satisfied, and (ii) the learner observes rewards, not just transitions.
Overcoverage
Returning to Fig. 1, we observe that the transitions from the initial state $s_0$ are such that all planted states in $I$ are reachable, but the unplanted states in $\mathcal{S}_1 \setminus I$ are not reachable by any policy. In particular, since all unplanted states are unreachable, any state that can only be reached from unplanted states is also unreachable, and hence we can achieve concentrability (Eq. (1)) without covering such states. This allows us to choose the data distribution $\mu$ to be (roughly) uniform over all states except for the one unreachable terminal state. This choice satisfies concentrability, but renders all reward observations uninformative (cf. Section 2.1.1). As a consequence, we show that the task described in Section 2.1.2, i.e., detecting $I$ based on transition data, is unavoidable for any algorithm with non-trivial offline RL performance.
The key principle at play here is overcoverage. Per the discussion above, we know that if the data distribution happened to be supported only over reachable states for a given MDP, all layer-1 examples in $D$ would have $s \in I$, which would make estimating $p$ trivial. Our construction for $\mu$ is instead uniform over all states in $\mathcal{S}_1$, and hence exhibits overcoverage, since it is supported over a mix of planted states and spurious (unplanted) states not reachable by any policy. This makes estimating $p$ challenging because—due to correlations between planted and unplanted states—no algorithm can accurately estimate transitions or recover the planted states until the number of samples scales with the number of states.
2.1.3 Specifying the MDP Family
Using the parameterized MDP family, we construct the hard family $\mathcal{M}$ for our lower bound by selecting a specific collection of values for the parameters.
Consider planted subsets $I \subseteq \mathcal{S}_1$ of a fixed size (chosen so that this size is an integer). We define two subfamilies of MDPs, $\mathcal{M}_1$ and $\mathcal{M}_2$, where each MDP in $\mathcal{M}_1$ is specified by a choice of planted subset $I$ together with one fixed setting of the parameters $(p, q, v)$, and each MDP in $\mathcal{M}_2$ is specified by a choice of planted subset together with a second fixed setting.[6] Finally, we define the hard family via $\mathcal{M} = \mathcal{M}_1 \cup \mathcal{M}_2$. [6] Recall that we assume without loss of generality that the relevant subset size is an integer.
Let us discuss some basic properties of the construction that will be used to prove the lower bound.

For all MDPs in $\mathcal{M}$, the parameters $p$ and $q$ and all but one coordinate of the reward vector $v$ are fixed to common values. This means there is no uncertainty in the reward function outside of a single terminal state, whose reward takes one value when $M \in \mathcal{M}_1$ and another when $M \in \mathcal{M}_2$. The values for $v$ are chosen to ensure that all states in $\mathcal{S}_1$ have the same value under action 2, given the choices for the other parameters above.

All MDPs in $\mathcal{M}_1$ (resp. $\mathcal{M}_2$) differ only in the choice of $I$. This property, along with the fact that all states in $\mathcal{S}_1$ have the same value (guaranteed by our choice of $v$), ensures that $Q^{\star}$ is the same for all $M \in \mathcal{M}_1$ (resp. $\mathcal{M}_2$). Furthermore, our choice of the reward parameters ensures that the optimal action for all states in $\mathcal{S}_1$ is action 1 for $\mathcal{M}_1$ (resp. action 2 for $\mathcal{M}_2$).

Our choice for $p$ and $q$ ensures that the marginal distribution of the next state $s'$ under the process $(s, a) \sim \mu$, $s' \sim P(\cdot \mid s, a)$ is the same for all $M \in \mathcal{M}$. This property is motivated by the hard inference task described in Section 2.1.2, which requires an uninformative marginal distribution.
The exact numerical values for the MDP parameters chosen above are not essential to the hardness result. Any parameter tuple can be used to establish a result similar to Theorem 1, as long as it satisfies the five general properties described in Appendix A.
2.1.4 Finishing the Construction: Value Functions and Data Distribution
We complete our construction by specifying a value function class $\mathcal{Q}$ that satisfies realizability and a data distribution $\mu$ that satisfies concentrability (Eq. (1)).
Value function class
Define functions $f_1, f_2 : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ as follows (differences between the two are highlighted in blue):
(2) 
The following result is elementary; see Appendix B for a detailed calculation.
Proposition 1.
We have $Q^{\star}_M = f_1$ for all $M \in \mathcal{M}_1$ and $Q^{\star}_M = f_2$ for all $M \in \mathcal{M}_2$.
It follows that by choosing $\mathcal{Q} = \{f_1, f_2\}$, realizability holds for all $M \in \mathcal{M}$. Note that this choice of $\mathcal{Q}$ satisfies the standard realizability condition that $Q^{\star}_M \in \mathcal{Q}$ for all $M \in \mathcal{M}$ (as in the conjecture of Chen and Jiang (2019)), but—as is—does not satisfy the stronger all-policy realizability condition that $Q^{\pi}_M \in \mathcal{Q}$ for all policies $\pi$ (as in Theorem 2). We handle this setting in Section 2.4.
Data distribution
Recall that in the offline RL setting, the learner is provided with an i.i.d. dataset $D = \{(s, a, r, s')\}$ where $(s, a) \sim \mu$, $r = R(s, a)$, and $s' \sim P(\cdot \mid s, a)$ (here $P$ and $R$ are the transition and reward functions for the underlying MDP). We define the data collection distribution $\mu$ to be (roughly) uniform over all states, excluding the single unreachable terminal state.
This choice for $\mu$ forces the learner to suffer from the hardness described in Section 2.1.2. Salient properties include: (i) both planted and unplanted states in $\mathcal{S}_1$ are covered, and (ii) the unreachable terminal state is not covered. Property (i) results in overcoverage, which makes estimating the parameters of the underlying MDP from transitions statistically hard, while property (ii) hides the difference between the reward functions of $\mathcal{M}_1$ and $\mathcal{M}_2$ and hence makes all reward observations uninformative.
We now verify the concentrability condition (Eq. (1)).

For layer $t = 0$: for all policies $\pi$, the distribution of the state-action pair is supported on the initial state $s_0$, which $\mu$ covers with constant mass. It follows that the density ratio at layer $0$ is bounded by an absolute constant.

For layer $t = 1$: for any $\pi$, the distribution of the state is uniform over the planted subset $I$, while $\mu$ is (roughly) uniform over all of $\mathcal{S}_1$. We conclude that the density ratio at layer $1$ is bounded by an absolute constant, where we have used that $|I|$ is a constant fraction of $|\mathcal{S}_1|$.

For layers $t \ge 2$: for any $\pi$, the distribution of the state is supported on the reachable terminal states in $\mathcal{S}_2$, each of which $\mu$ covers with constant mass. Therefore, the density ratio at these layers is bounded by an absolute constant as well.

We conclude that the construction satisfies concentrability (Eq. (1)) with $C = O(1)$.
2.2 Proof of Theorem 1
Having specified the lower bound construction, we proceed to prove Theorem 1. For any MDP $M \in \mathcal{M}$, we know from Eq. (2) that the optimal policy selects action 1 at every state in $\mathcal{S}_1$ if $M \in \mathcal{M}_1$ and action 2 if $M \in \mathcal{M}_2$, and that $M$ has a constant gap in value between the optimal and suboptimal actions for states in $\mathcal{S}_1$ (Eq. (3)). Meanwhile, the choice of actions at states outside $\mathcal{S}_1$ does not affect the policy's value. This implies that any policy with high reward must choose action 1 for almost all reachable states in layer 1 when $M \in \mathcal{M}_1$, and must choose action 2 for almost all such states when $M \in \mathcal{M}_2$. As a result, any offline RL algorithm with non-trivial performance must reliably distinguish between $\mathcal{M}_1$ and $\mathcal{M}_2$ using the offline dataset $D$. In what follows, we make this intuition precise.
For each , let denote the law of the offline dataset when the underlying MDP is , and let be the associated expectation operator. We formalize the idea of distinguishing between and using Lemma 1, which reduces the task of proving a policy learning lower bound to the task of upper bounding the total variation distance between two mixture distributions and .
Lemma 1.
Let be fixed. For any offline RL algorithm which takes as input and returns a stochastic policy , we have
Lemma 1 implies that if the difference between the average dataset generated by all and the average dataset generated by all is sufficiently small, no algorithm can reliably distinguish and based on , and hence must have poor performance on some instance. See Appendix C for a proof.
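To make the role of the mixtures concrete, the total variation distance between two finite mixtures can be computed directly when the sample space is small. A minimal numeric sketch, with made-up component distributions standing in for the dataset laws of the two families:

```python
# Total variation distance between two uniform mixtures of discrete
# distributions (made-up components standing in for the dataset laws).
def tv(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def mixture(components):
    n = len(components)
    return [sum(c[i] for c in components) / n for i in range(len(components[0]))]

# Two families of distributions over a 4-point sample space.
family1 = [[0.7, 0.1, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1]]
family2 = [[0.1, 0.1, 0.7, 0.1], [0.1, 0.1, 0.1, 0.7]]

bar1, bar2 = mixture(family1), mixture(family2)
# If the mixtures are close in TV, Lemma 1 says no test can reliably tell
# which family generated the data; these toy families are far apart.
d = tv(bar1, bar2)
print(d)
```

In the lower bound the point is the reverse direction: Lemma 2 shows the two mixtures are close, so the data cannot tell the families apart.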
We conclude the proof by bounding . Since directly calculating the total variation distance is difficult, we proceed in two steps. We first design an auxiliary reference measure , and then bound and separately. For the latter step, we move from total variation distance to the χ²-divergence and derive upper bounds on (resp. ) using a mix of combinatorial arguments and concentration inequalities. This constitutes the most technical portion of the proof, and formalizes the intuition about hardness of estimation under planted subset structure described in Section 2.1.2.
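The two-step scheme rests on two standard inequalities, stated here in general form for a reference measure ν: the triangle inequality for total variation, and the bound of total variation by the χ²-divergence via Cauchy–Schwarz.

```latex
% Triangle inequality through the reference measure \nu:
D_{\mathrm{TV}}(P_1, P_2) \le D_{\mathrm{TV}}(P_1, \nu) + D_{\mathrm{TV}}(\nu, P_2).
% Cauchy--Schwarz bound for each term:
D_{\mathrm{TV}}(P, \nu)
  = \frac{1}{2}\int \Bigl|\frac{dP}{d\nu} - 1\Bigr|\, d\nu
  \le \frac{1}{2}\sqrt{\int \Bigl(\frac{dP}{d\nu} - 1\Bigr)^{2}\, d\nu}
  = \frac{1}{2}\sqrt{\chi^{2}(P \,\|\, \nu)}.
```

The freedom to choose ν is what makes the χ²-divergence calculation tractable for the mixture distributions at hand.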
Our final bound on the total variation distance (proven in Appendix D) is as follows.
Lemma 2.
For all , we have
2.3 Discussion
Having proven Theorem 1, we briefly interpret the result and discuss some additional consequences.
Separation between online and offline reinforcement learning
In the online reinforcement learning setting, the learner can execute any policy in the underlying MDP and observe the resulting trajectory. Our results show that in general, the separation between the sample complexity of online RL and offline RL can be arbitrarily large, even when concentrability is satisfied. To see this, recall that in the online RL setting, we can evaluate any fixed policy to precision using trajectories via Monte Carlo rollouts. Since the class we construct has only two possible choices for the optimal policy and has suboptimality gap , we can learn the optimal policy in the online setting using trajectories, with no dependence on the number of states. On the other hand, Theorem 1 shows that the sample complexity of offline RL for this family can be made arbitrarily large.
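The online Monte Carlo argument is elementary, and can be sketched concretely. The toy environment and policies below are our own illustration: with a constant suboptimality gap, order log(1/δ)/gap² rollouts per candidate policy suffice by a Hoeffding bound, independent of the number of states:

```python
import random

random.seed(0)

# Toy one-step environment (illustrative only): the reward distribution
# depends on which of the two candidate policies is executed.
def rollout(policy_action):
    # Candidate 1 is optimal in this toy instance: mean reward 0.7 vs 0.3.
    p = 0.7 if policy_action == 1 else 0.3
    return 1.0 if random.random() < p else 0.0

def monte_carlo_value(policy_action, n):
    """Estimate the policy value by averaging n rollouts."""
    return sum(rollout(policy_action) for _ in range(n)) / n

# With a constant gap, a number of rollouts independent of the state space
# separates the two candidates with high probability.
n = 2000
values = {a: monte_carlo_value(a, n) for a in (1, 2)}
best = max(values, key=values.get)
print(best)
```

This mirrors the argument in the text: online access lets us estimate each of the two candidate optimal policies directly, sidestepping the distinguishing problem that makes the offline setting hard.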
Linear function approximation
The observation above is particularly salient in the context of linear function approximation, where for a known feature map . Our lower bound construction for Theorem 1 can be viewed as a special case of the linear function approximation setup with by choosing . Consequently, our results show that the separation between the complexity of offline RL and online RL with linearly realizable function approximation can be arbitrarily large, even when the dimension is constant. This strengthens one of the results of Zanette (2021), which provides a linearly realizable construction in which the separation between online and offline RL is exponential with respect to dimension.
Why aren’t stronger coverage or representation conditions satisfied?
While our construction satisfies concentrability and realizability, it fails to satisfy stronger coverage and representation conditions for which sampleefficient upper bounds are known. This is to be expected (or else we would have a contradiction!) but understanding why is instructive. Here we discuss connections to some notable conditions.
Pushforward concentrability. The stronger notion of concentrability that for all , which is used in Xie and Jiang (2021), fails to hold because the state is not covered by . This presents no issue for standard concentrability because is not reachable starting from .
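The distinction can be seen in a tiny example of our own (not the construction itself): a state may be reachable in one step from a state-action pair covered by the data distribution, so that a pushforward-style condition fails, while being unreachable under every policy from the initial state, so that standard concentrability is unaffected:

```python
# Toy example (ours, not the paper's construction): state 2 is reachable in
# one step from state 1, but state 1 itself is never visited from the
# initial state 0, so state 2 appears in no occupancy measure.
P = {  # P[(s, a)] = next-state distribution over states {0, 1, 2}
    (0, 0): [1.0, 0.0, 0.0], (0, 1): [1.0, 0.0, 0.0],
    (1, 0): [0.0, 0.0, 1.0], (1, 1): [0.0, 0.0, 1.0],
}
mu_state = [0.5, 0.5, 0.0]  # data covers state 1 (overcoverage) but not 2

# A pushforward-style condition needs mu to cover every state reachable in
# one step from a covered state-action pair; it fails at state 2.
pushforward_ok = all(
    mu_state[s2] > 0
    for (s, a), dist in P.items() if mu_state[s] > 0
    for s2, p in enumerate(dist) if p > 0
)

# Standard concentrability only involves states visited from the initial
# state; here every policy stays at state 0 forever, so it is satisfied.
reachable = {0}
standard_ok = all(mu_state[s] > 0 for s in reachable)
print(pushforward_ok, standard_ok)
```

The same mechanism operates in the construction: the data distribution over-covers unreachable states whose successors it does not cover.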
Completeness. Bellman completeness requires that the value function class has for all , where is the Bellman operator for . We show in Eq. 2 that the set of optimal Q-value functions is small, but completeness requires that the class remains closed even when we mix and match value functions and Bellman operators from and , which results in an exponentially large class in our construction. To see why, first note that by Bellman optimality, we must have if is complete. We therefore also require for and . Unlike the optimal Q-functions, which are constant across , the value of for depends on whether or , where is the collection of planted states for . (Footnote 7: Recall that is the optimal Q-function for any and consider where has planted set . For , we have while for , we have .) As a result, there are possible values for the Bellman backup, which means that the cardinality of must be exponential in .
2.4 Extensions
Theorem 1 presents the simplest variant of our lower bound for clarity of exposition. In what follows we sketch some straightforward extensions.

Realizability for all policies (Theorem 1). The value function class in the prequel satisfies the standard realizability condition that for all as in Chen and Jiang (2019), but does not satisfy the stronger all-policy realizability condition that for all . This is a simple issue to fix: general policies will have different initial-state values , but otherwise agree with or at every other state.
In more detail, a general policy may select different actions on different states , and the value depends on the probability that selects action 1 or 2 in . In particular, for any , while we have for all , we have
which may not belong to ; MDPs in satisfy a similar expression for .
We address this issue by enlarging using linear function approximation. Define a feature map
Then for any policy , we have
for all . As a result, defining , we have for all . One can verify that this construction has , which proves Theorem 1.
Beyond linear function approximation, there are many other ways to enlarge (from ) to obtain interesting consequences. For example, observing that takes on at most different values across all and all deterministic policies , we can simply define , which is a finite class and has for all deterministic . For this new , since , the interpretation of the lower bound should be that samples are required, which is slightly weaker than the condition that in Theorem 1, but nonetheless rules out sampleefficient learning.

Policy evaluation. Our lower bound immediately extends from policy optimization to policy evaluation. Indeed, letting and denote the optimal policies for and respectively, we have for all . It follows that any algorithm which evaluates policies to precision with probability at least for small numerical constants can be used to select the optimal policy with constant probability, and hence must have by our lower bound.
To formally cast this setup in the policy evaluation setting, we take as the class of policies to be evaluated (recall that is constant across all ), and we require a value function class such that for all , . It suffices to select for an arbitrary choice of and , which has . This choice for is admissible because all lead to the same Qvalue functions for , and likewise for .
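The reduction from evaluation to optimization used here is generic: given an evaluator accurate to within ε < gap/2 on each of two candidate policies, picking the policy with the larger estimate recovers the optimal one. A self-contained sketch with a simulated evaluator of our own (not an actual offline evaluation method):

```python
# Generic reduction: an evaluator accurate to within EPS < gap/2 on each of
# two candidate policies identifies the optimal one. The "evaluator" is
# simulated worst-case noise standing in for an offline evaluation method.
TRUE_VALUES = {"pi_1": 0.75, "pi_2": 0.25}  # constant value gap of 0.5
EPS = 0.2                                   # evaluation precision, < gap / 2

def evaluate(policy, noise):
    """Simulated eps-accurate evaluator: true value plus a perturbation
    bounded in magnitude by EPS."""
    assert abs(noise) <= EPS
    return TRUE_VALUES[policy] + noise

# Even under adversarial within-precision noise, the comparison is decided
# by the gap: 0.75 - EPS > 0.25 + EPS.
estimates = {"pi_1": evaluate("pi_1", -EPS), "pi_2": evaluate("pi_2", +EPS)}
best = max(estimates, key=estimates.get)
print(best)
```

This is why a sample complexity lower bound for selecting the optimal policy transfers directly to a lower bound for policy evaluation at precision below half the gap.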

Learning an ε-suboptimal policy. Theorem 1 shows that for any