The FTC's Lawless Saga to Become a Civil Rights Regulator
The FTC uses an algorithm to guess consumers' likely races, then sues for discrimination without any authority to do so. The FTC's dubious statistical approach is coming to AI regulation. Here's how it happened.
Welcome back to Competition on the Merits. I promised last issue that I would shift from antitrust to consumer protection issues. With all of the attention antitrust is getting these days, the FTC and CFPB consumer protection issues seem to get a little less scrutiny than they used to. Please subscribe, upgrade to paid, or send to a friend! And as always, feel free to contact me with bright ideas for topics you would like to see covered at Competition on the Merits.
I want to start out with an FTC consumer protection issue that recently came and went without much fanfare. It is quite an involved story. And it is all about the FTC’s four-year-long campaign to convert itself, without a whiff of legal authority to do so, into a civil rights regulator. The issue has drawn important dissents from Republican FTC Commissioners Holyoak and Ferguson. And it attracted the attention of Senator Cruz, who has been on top of this particular issue and sent a letter to Chair Khan and the FTC demanding an explanation:
But despite Senator Cruz’s efforts along with some excellent statements from several Commissioners over the years, the issue has not received remotely enough attention relative to its import. I was an FTC Commissioner for 3 years, worked for the agency 4 different times, and have studied the Commission for nearly 30 years. And I would rank this lawless and political power grab as among the most pernicious abuses I’ve observed.
Now, flouting the rule of law is not new to the FTC. This is the same FTC that lost 9-0 in FTC v. AMG, with the Supreme Court concluding, quite predictably, that language entitling it to a “permanent injunction” did not allow it to obtain equitable monetary relief. This is the same FTC that has lost in two district courts (and won in one) asserting that its Section 6(g) authority to “make rules and regulations for the purpose of carrying out the provisions” of the Act authorizes it to promulgate competition legislative rules. Courts have rejected that view, and COTM readers know that result is correct. The FTC has no path to victory and its Noncompete Rule is a dead man walking.
The FTC has apparently not been chastened by those harsh experiences with judicial review. But there is something different about this one. The FTC is now quietly (at least, so far) exercising authority it does not have to allege that specific companies are engaged in racial discrimination. Yes, the Federal Trade Commission is doing civil rights regulation.
What business does the FTC have as a civil rights regulator? Absolutely zero.
Racism is unequivocally horrid, rancid behavior. So perhaps, one might think, we should just give the FTC some grace here? Surely, the FTC must have some pretty solid evidence of racial animus in these cases to so brazenly flout its authority? Actually, none at all. The FTC is deploying a statistical model (Bayesian Improved Surname Geocoding) to make inferences about a particular customer’s likely race from last name and location. It then uses those guesses about racial identification to examine whether, on average, a lender is charging different rates to those it assumes belong to different racial groups. The BISG method underlying the FTC’s analysis, and its complaints in two recent cases, do not include ANY evidence that the defendants knowingly (much less intentionally) offered different rates to members of different races. The complaints simply allege that if you make statistically educated – but often flawed (as we will discuss) – guesses about a borrower’s race, and those guesses are correct, then one can conclude the different groups paid different prices on average. Based on that analysis, the FTC has alleged that lenders have engaged in racial discrimination and sued them to the tune of millions of dollars. But if this approach sounds like disparate impact analysis to you, that is because it is exactly that.
By the way, discriminating against borrowers on the basis of race WOULD violate the Equal Credit Opportunity Act (ECOA) – a violation the FTC also charges (and should win). ECOA prohibits creditors from discriminating against a credit applicant on the basis of race, color, religion, national origin, sex, marital status, age, or because of receipt of public assistance. But the FTC charges an additional “unfairness” count under its consumer protection authority that gives it no additional remedies. Why go through all the trouble when ECOA provides the same remedies? Stay tuned.
I hope that is enough to get you interested in the story. Let’s start back in 2020.
Planting the Seed for the FTC’s “Common Law” of Civil Rights Regulation
The story starts with my successor at the FTC, Rohit Chopra. Chopra is now the Director of the CFPB. But back in 2020 he was an FTC Commissioner. Chopra is smart, ambitious, and one of the hardest working people I know. We do not agree on much in terms of substantive issues – though there is a small subset of issues upon which we are quite closely aligned. But as a Commissioner who was in the minority for all of my term and set a record for dissenting opinions, I have always respected Chopra for a few things: (1) willingness to vote his conscience on important issues regardless of political pressure; (2) willingness to write his dissents, expose his reasoning to criticism, and actively participate in the marketplace for ideas; and (3) no surprises – if Chopra disagrees with you, you know it because he tells you so.
True to form – Chopra told folks exactly what would happen here. That is admirable. And Chair Khan picked up the playbook and ran with it. Here is Chopra in a speech in 2020 arguing that the FTC’s consumer protection authority prohibiting unfair acts and practices could and should be used to combat racial discrimination precisely because proving those cases was … well … hard. It required evidence of discrimination. Actual evidence is hard to come by. So couldn’t the FTC come up with a shortcut? Yes, says then-FTC Commissioner Chopra:
True to form again, Chopra tells you the entire game plan. Using FTC unfairness rather than ECOA solves three separate problems: (1) it opens the door for the FTC to get into the disparate impact analysis game; (2) ECOA and the Fair Housing Act are limited to specific sectors of the economy, whereas the FTC’s unfairness authority allows it to deploy this approach across the entire economy – Congressional delegation be damned; and (3) do not miss this one – “machine learning and AI” are coming and Chopra (then) and Khan (now) would like the FTC in position to regulate it to death. They’ve already started to weaponize the FTC against Little Tech.
Chopra planted the seed at the FTC and then left for the CFPB. We will check in later on his parallel plan to run the same gambit at the CFPB, attempting to establish that agency too as a civil rights regulator using its unfairness authority – which is modeled after the FTC’s.
But the seed Chopra planted at the FTC would grow as intended. First, in March 2022, the FTC settled a case with an auto dealer, with Chair Khan and Commissioner Slaughter issuing a separate statement indicating they would have preferred adding a Section 5 unfairness count alleging the auto dealer discriminated against Black consumers. They did not have a third vote for this proposition at the time. Their statement copies Chopra’s blueprint and encourages the FTC to establish the legal foundations for its role as a civil rights regulator with its unfairness authority under the FTC Act.
But by October 2022, the Khan FTC had the votes it needed to take the next step. The FTC settled another auto dealer case. This time the FTC voted 3-2 to authorize a count alleging discrimination as unfairness in Passport Automotive Group. The plan had been implemented and the FTC would start to create civil rights policy through its consumer protection authority. The issue was not litigated – settling parties have a lot of reasons to sign these consents. And the consent imposes zero remedies beyond those already available under ECOA. But by bullying defendants into consents that include unfairness counts, the FTC has a lot to gain.
For readers familiar with the FTC’s mission to expand its unfairness authority to reach privacy regulation, this should sound quite familiar. The FTC has a long history of handling uncertainty about its authority with a process that goes something like this:
First, leverage the FTC’s power over defendants and extract settlements that concede the agency’s unfairness authority.
Second, stack up these settlements over time. Adding new relief where possible and where the settling party will agree to it. Pretty soon you have a large stack of settlements employing the FTC’s unfairness or unfair methods authority to a new area: privacy, data security, competition.
Third, it is critical not to litigate these cases. A court might expose the vulnerabilities in the FTC’s claimed authority. Remember, we are settling because we have leverage over the defendants (perhaps they violated another statute, perhaps being tied up in three years of litigation with the FTC is more costly than settling).
Finally, point to the large stack of settlements and claim that they are the “new common law” of the FTC. Of course, this approach has nothing to do with the common law, and produces none of its benefits or values, as Rybnicek and I have pointed out in the competition context. But this approach serves the FTC well. Claim that the settlements mean everyone should be on notice that the FTC claims this authority, and that they set the standard of conduct companies owe under the FTC Act. Without litigation, this “common law” can quickly become a living, breathing thing at the agency.
The time to stop this creation of de facto regulatory authority and agency mission creep is at the outset – by forcing the conversation about whether the authority exists in the first place.
Is Disparate Impact Discrimination an Unfair Act or Practice Under the FTC Act?
No.
And both Republican Commissioners dissented from the discrimination count in Passport Automotive Group. In particular, Commissioner Noah Phillips’s excellent dissent lays out precisely why the text, structure, and history surrounding Section 5 of the FTC Act, including the 1938 Wheeler-Lea amendments, make it clear that unfairness authority does not address discrimination. It compares the FTC Act to statutes where Congress actually says it is attempting to make discrimination unlawful and finds the former lacking each of the legal indicia one would expect in an antidiscrimination statute. It is worth reading in its entirety. But I want to focus on two fatal flaws in the Chopra / Khan disparate impact theory of unfairness.
Recall – or learn for the first time – that the FTC’s unfairness authority in Section 5(n) of the FTC Act was codified by Congress in 1994. It states that an act or practice is unfair if it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” In addition to that three-part test, it also specifically prohibits public policy considerations from serving as the primary basis for an unfairness determination.
The first fatal flaw is about what is missing from Section 5 of the FTC Act. Phillips’ Passport Automotive Group dissent emphasizes that the Chopra-Khan theory is one of disparate impact as opposed to disparate treatment. That is, it is a legal theory that declares unlawful an otherwise neutral policy that has the effect of disproportionately impacting a protected class. As Phillips points out, some antidiscrimination laws allow disparate impact claims. Some do not. But the Supreme Court in Texas Dep’t of Hous. & Comty. Affs. v. Inclusive Communities Project, has ruled that “antidiscrimination laws must be construed to encompass disparate impact claims when their text refers to the consequences of actions and not just the mindset of actors, and where that interpretation is consistent with statutory purpose.” Section 5 of the FTC Act fails that test. And it is not close. Here’s former Commissioner Phillips’ analysis:
No duck, indeed.
The second fatal flaw is about what is missing from the Chopra-Khan theory. Let’s go back to the statute. Chopra, and later Khan, have defended the theory on the ground that the three part test for unfairness is satisfied by the alleged discrimination. They breeze through the test, arguing that clearly racial discrimination in car prices causes substantial injury, and that racial discrimination in car prices does not offer countervailing benefits, nor can the victim of discriminatory pricing avoid it. Voila, they say! A 5(n) violation.
Not so fast, my friends. What are we talking about here? The statute requires an ACT OR PRACTICE. It requires some sort of business conduct. And what, precisely, does the FTC allege is unlawful here? Disparate impact. Not treatment.
The FTC does not allege that an act or practice results in substantial injury. The FTC alleges the outcome itself is unlawful. That is, the FTC argues that the fact that Latino consumers pay more than non-Latino White consumers is the act or practice that violates Section 5. That is not how this works. The FTC’s analysis skips the step of identifying an act or practice to which the unfairness elements would then be applied. (For extra credit: Ask yourself why that might be. We will come back to it.) Instead, it points to the disparate impact as both the act and the harm. Here’s Phillips making a similar point:
That’s right. I’d press this point further than Phillips does, however. The pleading not only fails as a matter of Section 5, it also seems to fail to provide even cursory notice to the defendant concerning what acts it took to allegedly violate Section 5. “Impose costs,” standing alone, almost surely is inadequate. But this is precisely the sort of thing an agency can get away with when its legal mission is expanded carefully and deliberately through settlement practice rather than litigation.
For what it is worth, courts have ultimately vindicated Phillips’s view of unfairness and in so doing thrown a wrench into Chopra’s attempt to make the same play at the CFPB. In Chamber of Commerce v. CFPB, the district court struck down the CFPB’s attempt to police discrimination in the financial services market. The court concluded the CFPB’s unfairness authority, modeled after the FTC’s, did not reach discrimination: “[e]ven if an agency’s ‘regulatory assertions had a colorable textual basis,’ a court must consider ‘common sense as to the manner’ in which Congress would likely delegate the power claimed in light of the law’s history, the breadth of the regulatory assertion, and the economic and political significance of the assertion.”
I want to pivot from Chopra’s original idea of the FTC as a civil rights regulator in 2020, and from Chair Khan’s attempt to execute on the plan through settlements designed to establish and build acceptance of the legal authority in 2022, to the current state of play in 2024 and why it matters.
But first we need a very short detour to discuss Bayesian Improved Surname Geocoding.
What the Hell is Bayesian Improved Surname Geocoding (BISG) And Why You Should Care
Let’s start with what BISG is and where it came from. Marc Elliott is a researcher at the RAND Corporation. One of the problems Elliott has worked on throughout his career is figuring out the impact of various health interventions and other treatments by race and ethnicity when data on race and ethnicity are missing. Like researchers in other areas, Elliott worked on methods for imputing missing data.
There are several different methods researchers use to impute race and ethnicity data when they are missing. One of those methods is “geocoding,” which refers to using a person’s address to link them to census data about the geographic area where they live. For example, I live in the 22101 ZIP code in McLean, VA, which has all sorts of demographic information associated with it. If all a researcher knew about someone was their ZIP code, they could update their guess about that person’s race and ethnicity based upon the distribution of race and ethnicity in that geographic area. Easy enough – this is just using the ZIP code to improve our guesses about race and ethnicity, but the same approach works for any other characteristic measured at the ZIP-code level as well.
A second method is “surname analysis,” which refers to a similar sort of extrapolation from one’s surname to a particular group (as defined by race, ethnicity, or origin). It is the same basic idea – when a surname belongs almost exclusively to one racial or ethnic group, one can update the probability that a person with that surname belongs to that group.
It turns out that each method has some flaws – like most methods of data imputation. After all, the idea is to use information we have to make educated guesses about data we do not have. The quality of the guesses is going to depend on: (1) the quality of the data we do have; and (2) the quality of the inferences we can make from the data we do have about the characteristics we are trying to predict. In the case of surname analysis, it does a particularly poor job of distinguishing Blacks from non-Hispanic Whites, because so many surnames are shared across those groups. In the case of geocoding, it does a particularly poor job of identifying Hispanics and especially Asians.
Elliott and his co-authors jammed the two methods together. Here’s a paper describing the hybrid approach. But the basic idea is simple – the authors use surname lists to update the geocoded information in order to estimate the probability that a particular person (with a particular address and surname) belongs to each racial or ethnic category. James Johnson lives in 92119. The BISG method predicts the probability that Mr. Johnson is Asian (or Black, Hispanic, or White) given that information.
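To make the mechanics concrete, here is a minimal sketch of the kind of updating BISG performs. Every number below is invented purely for illustration – the real method uses the Census Bureau’s surname list and census block-group counts, not these toy figures – but the arithmetic is the same: start with the racial composition of the neighborhood, weight it by how common the surname is within each group, and renormalize.

```python
# A toy BISG-style update. All proportions are made up for illustration.

# Prior from geocoding: racial/ethnic composition of the borrower's block group.
p_race_given_geo = {"white": 0.55, "black": 0.35, "hispanic": 0.07, "asian": 0.03}

# Update from the surname list: share of each group carrying this surname,
# i.e. P(surname | race). Also invented numbers.
p_surname_given_race = {"white": 0.0008, "black": 0.0020, "hispanic": 0.0001, "asian": 0.0001}

def bisg_posterior(prior, likelihood):
    """Bayes' rule: P(race | geo, surname) is proportional to P(race | geo) * P(surname | race)."""
    unnormalized = {race: prior[race] * likelihood[race] for race in prior}
    total = sum(unnormalized.values())
    return {race: value / total for race, value in unnormalized.items()}

posterior = bisg_posterior(p_race_given_geo, p_surname_given_race)
for race, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{race:>8}: {prob:.3f}")
```

With these made-up inputs, a surname that is relatively more common among Black families pulls the neighborhood-based prior toward a higher probability of the borrower being Black – a probability, not an identification.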
This is really an application of Bayes’ Rule – that is, updating the prior probabilities of membership in racial and ethnic categories derived from geographic information with the surname lists to arrive at posterior probabilities. By combining these two sources of information, BISG predicts better than either method alone. One can imagine using imputed race and ethnicity data to improve our understanding of how health treatments affect different groups, for economic research, and more. But BISG has some pretty important limitations. For our purposes, the most salient is that it is not designed to predict the race of any particular individual.
Just for example, BISG appears to overestimate Hispanic and Black prevalence in a sample while underestimating White and Asian prevalence. From Elliott et al. (2009):
BISG first gained policy salience when the CFPB, back in 2014 (then under Richard Cordray), started using it to establish disparate impact in ECOA claims. The House Committee on Financial Services caught on to this, held hearings, and ultimately issued a report, “CFPB Junk Science and Indirect Auto Lending,” calling attention to and criticizing the CFPB’s use of the BISG method for this purpose. The Committee emphasized that the errors here were not being made for research purposes – but rather to determine which lenders were guilty of discrimination and which buyers should be compensated. One well-known anecdote included a WSJ report of a white man in Alabama receiving a settlement check.
Elliott himself recognized these limitations, acknowledging that BISG has real problems when it comes to predicting whether a particular person is Black or White – but stressed its value for research questions where knowing average effects is very useful. There have been other criticisms and analyses of BISG estimation errors. One analysis reported “a 20% overestimation of African Americans,” and the Financial Services Committee aired internal CFPB documents indicating the Bureau understood many of these weaknesses.
But for our immediate purposes – the key to understand here is that what BISG does, what it generates, is an estimate of the probability that a particular person belongs to a racial or ethnic group. That is what it does. What it does NOT do is identify whether a particular person does belong to a particular racial or ethnic group. So BISG might be useful for substantiating claims like: “on average, this auto dealer sold cars to Group A at a higher price than Group B.” Those averages will be built upon estimates and predictions that, with a large enough sample, will be acceptably precise and give us some information about general tendencies in the data. But it is not useful to substantiate claims like: “The auto dealer who sold this car for $2000 to a specific person in Group A would have sold it for $1500 to a person in Group B.”
The key is that BISG does not tell us anything about individual acts or practices. The issue is that the CFPB first, and now the FTC, are using BISG for precisely what its creators warned about: predicting the race of an individual.
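To see why the group-average versus individual distinction matters, here is a toy simulation. Every number in it – the group shares, the markups, the accuracy of the proxy – is invented for illustration and comes from neither the FTC’s complaints nor the BISG literature. The point is only the structural one made above: a probabilistic proxy can pick up a directional difference in group averages while still mislabeling a meaningful share of the individual people it classifies.

```python
# Toy simulation of a noisy probabilistic group proxy (all numbers invented).
# By construction, group B pays a higher markup on average than group A.
import random

random.seed(1)

rows = []
for _ in range(50_000):
    true_b = random.random() < 0.3                     # 30% of borrowers are in group B
    # Proxy posterior P(B | surname, geography): usually points the right way, not always.
    p_b = random.betavariate(6, 2) if true_b else random.betavariate(2, 6)
    markup = random.gauss(1200 if true_b else 1000, 300)
    rows.append((true_b, p_b, markup))

def mean(values):
    return sum(values) / len(values)

# Group-level disparity measured with the truth vs. measured with the proxy.
true_gap = mean([m for b, _, m in rows if b]) - mean([m for b, _, m in rows if not b])
proxy_gap = mean([m for _, p, m in rows if p > 0.5]) - mean([m for _, p, m in rows if p <= 0.5])
print(f"true average gap:  ${true_gap:.0f}")
print(f"proxy average gap: ${proxy_gap:.0f}  (the group-level tendency survives, attenuated)")

# Individual labels are another matter: a meaningful share of borrowers the
# proxy labels "group B" are not in group B at all.
labeled_b = [(b, p, m) for b, p, m in rows if p > 0.5]
wrong = sum(1 for b, _, _ in labeled_b if not b)
print(f"share of proxy-labeled 'B' borrowers actually in group A: {wrong / len(labeled_b):.0%}")
```

That last line is the white-man-in-Alabama-receives-a-settlement-check problem in miniature: the aggregate tendency can be real while any given individual label is wrong.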
The Khan FTC, BISG, and Section 5 as a Civil Rights Statute
The FTC has recently filed two complaints alleging discrimination in auto lending: Coulter Motors and Asbury Automotive Group. The big difference between the two is that Coulter Motors is another consent decree whereas Asbury Automotive is headed to administrative litigation at the FTC.
Coulter Motors walks and talks like the earlier FTC 5(n) unfairness settlements seeking to establish the FTC as a civil rights regulator. Recall in Passport Automotive the FTC alleged the auto lender “imposed costs” that resulted in a disparate impact on Latino car buyers. The FTC does the same here.
It alleges the disparate impact itself as the underlying “act or practice” that violates Section 5. And as with Passport Automotive, the FTC alleges an ECOA claim – meaning the Section 5 unfairness claim adds absolutely nothing to the remedies available to the Commission.
I told you above to start thinking about why the FTC would allege a completely superfluous unfairness claim when it also pleads an ECOA violation. Time for your answer.
If you had on your answer sheet something like: “because to complete the Chopra-Khan plan to establish the FTC as a civil rights regulator, one needs to establish the legal authority. ECOA is limited to credit. And the plan is to make the FTC the most powerful civil rights regulator in the country – indeed, the unfairness theory would allow it to find violations where many antidiscrimination statutes would not. To do so, the FTC wants to create a ‘common law of discrimination’ with its unfairness authority, much like it tried to do in the privacy space” – that’s a long answer, but it gets all the points.
Fortunately, the FTC’s two Republican Commissioners are excellent lawyers, and like Phillips, see the problems inherent in Khan’s gambit. If I learned one thing while at the FTC in the minority it was that often the most aggressive attempts to expand agency power were hidden in consent decrees. “But the parties agreed to it, Commissioner!” was a common plea masquerading as a serious argument.
Commissioner Holyoak’s dissent agrees with Phillips’ earlier analysis in Passport Automotive, emphasizing how the district court’s analysis in Chamber of Commerce v. CFPB is especially problematic for the FTC’s ambitious claims about the scope of its unfairness authority. But perhaps most importantly, Holyoak draws a direct line between the FTC’s lawlessness in expanding its unfairness authority and its failing attempt to expand its unfair methods of competition authority to impose a full ban on Noncompetes. This is a powerful paragraph.
For my money the last two sentences identify the heart of the problem: the FTC majority is long on creative thinking and ambition when it comes to expanding authority beyond what Congress has delegated but remarkably short on the analytical input and work to get there. Simply reciting the language of the statute, increasing the volume each time, and hoping for a different result is not going to cut it. Not for the Noncompete Rule, as we have explained. And not here.
Commissioner Ferguson’s dissent completes the 1-2 punch combination. You should read it. After expressing some reservations about whether disparate impact claims are cognizable even under ECOA, and walking through the state of the law on whether ECOA itself satisfies the Court’s requirements for disparate impact claims set out in Inclusive Communities, Ferguson votes yes on authorizing the ECOA claim under the “reason to believe” standard. But the key for our discussion here is Ferguson’s treatment of the Section 5 discrimination-as-unfairness claim:
I have not yet mentioned the FTC’s second discrimination in auto lending complaint: Asbury Automotive. The Complaint came one day after Coulter Motors and alleges conduct substantively identical to it. Let’s look at the Complaint:
The Asbury Complaint looks essentially identical to the Coulter Motors Complaint EXCEPT for the fact that Asbury does NOT include an unfairness count for disparate impact discrimination. If you have been following the FTC for the past decade or so, you have already figured out why not. Mission creep of the sort the FTC is attempting here is executed through consents, not litigation. And the defendants in Asbury Automotive were willing to litigate rather than settle.
Yes, the FTC will rule for itself in Asbury. But this means that a court of appeals (of the defendant’s choosing) will ultimately decide the question rather than the Commission. And that makes all the difference in the world. Commissioner Ferguson brings this point home very nicely:
That is exactly right. The FTC’s go-to move has long been to create what looks and feels like a “common law” around novel authority while keeping that authority guarded from the courts. It did so with privacy and data security. It did so with unfair methods of competition. Ferguson’s skepticism of the FTC’s “law through settlement” approach is well founded and well put.
As Rybnicek and I have argued at length, the so-called “common law” FTC approach has none of the virtues of that system and generates new and significant costs as bureaucratic whims change with the times while the claimed authority only ever expands. Judicial review plays an important role in disciplining these shenanigans. And the FTC has been on the receiving end of that discipline as of late. But the availability of administrative litigation, combined with the FTC’s willingness to evade judicial review by letting its new toy theories out to see the world only in the settlement context, exacerbates the problem. It also lays bare the FTC’s procedural games for all to see. This is not a majority confident in its legal analysis, nor one with enough respect for the rule of law and the parties it regulates to expose its ideas to judicial review. It is the person who talks the loudest and with the most bravado about their willingness to join the fight but does not put themselves at risk until it is over.
A $64 Billion Question: Is All of This Really About Regulating AI?
The administrative state is coming for AI. The Biden-Harris administration has made clear that it stands on the side of top-down regulatory mandates for AI rather than letting 1,000 flowers bloom and compete. This is already running long so here is not the place to run through the various proposals that have been made to bring “fairness” to AI and machine learning through regulation. Chilson & Thierer have a nice summary here.
The Biden-Harris Executive Order on combating racial discrimination through algorithms is just one example.
The Executive Order instructs agencies to focus their civil rights authorities and offices on emerging threats, such as algorithmic discrimination in automated technology; improve accessibility for people with disabilities; improve language access services; and consider opportunities to bolster the capacity of their civil rights offices. It further directs agencies to ensure that their own use of artificial intelligence and automated systems also advances equity.
The FTC has heeded the call in a variety of ways, from a rulemaking proceeding on regulating commercial privacy that addresses “algorithmic bias and justice” to openly considering additional rulemaking to ban “any system that produces discrimination.” Nearly all of these rulemaking endeavors are grounded at least partially in the FTC’s unfairness authority – just like the Coulter Motors consent.
Chair Khan has already claimed that algorithmic discrimination is within the FTC’s enforcement purview and has identified racial discrimination as a target of the FTC’s AI enforcement priorities. That may not sound so bad. Racial discrimination is abhorrent. It should be weeded out. The impulse to fight it with all tools available is an understandable one.
But when one understands the FTC’s prolonged pursuit of disparate impact enforcement powers that Congress never granted it, a more problematic picture emerges. The FTC’s first use of these controversial powers is not to attack actual racial discrimination but rather to hunt out statistical differences in outcomes, attribute all differences to discrimination without further examination, and bring the vast powers of the FTC to bear in an effort to shape the economy in a way three Commissioners deem to be sufficiently fair. Congress simply did not grant the FTC such authority. Congress knows exactly how to grant antidiscrimination powers when it wants to do so.
As Commissioner Ferguson observes, it is a dubious assertion indeed that “Congress would have worked so hard to adopt our suite of federal civil rights laws—and why so many Americans struggled tirelessly for their passage—when it had already given the Commission the power to proscribe any sort of discrimination it wanted to proscribe.” For all its posturing, the FTC understands this as well and only asserts this authority in consents rather than in litigation. No profiles in courage here.
What comes next? More consents to cement the “common law of discrimination” at the FTC. More claims that the settlements create a “well understood” acceptance that the FTC has this authority. This is the FTC’s mission creep playbook.
What else? Undoubtedly proposed rulemaking targeting algorithmic discrimination in AI that builds upon the FTC’s disparate impact theory and its application of BISG. Rulemaking that puts the FTC in the room as AI is developed, and that places regulatory risk and burden on Little Tech rather than fostering an environment that allows innovation to thrive.
What can a company do to avoid liability? One can counsel a company to avoid actual discrimination. But under the FTC’s disparate impact theory of unfairness, there is no telling what the FTC will find unlawful. Quite literally any algorithmic outcome that impacts any two groups differently is potentially unlawful. It is hard to imagine one that does not. The FTC theory of discriminatory unfairness has no limiting principle and the FTC does not appear to take seriously the limits already imposed upon it by the law. This means one very predictable, and unfortunate, consequence is that more and more agency resources will be diverted away from its actual mission of protecting consumers and instead allocated toward pursuits that will ultimately be undone by the courts.
Keep an eye on the FTC’s continued attempt to create and expand its role as a civil rights regulator. The FTC is committed to this pursuit – with or without a Congressional delegation – and whether or not it leaves consumers worse off. Look for it to become more common in settlements. Look for greater application of BISG and related methods to identify statistical disparities where there is no proof of discrimination. Look for more Congressional oversight. Look for it in AI rulemaking that puts the FTC in control of the details of innovation. Look for it everywhere but in litigation, where the agency would have to defend its authority. Because it does not have it.