Did the Government Really Spend $830 Million on a National Parks Survey?
A deep dive into the canceled contract Elon Musk called “a $10 survey for a billion-dollar price tag.”
“It was a one-page, ten-question survey — something a middle schooler or SurveyMonkey could do for free. They tried to charge $830 million.”
Elon Musk’s Department of Government Efficiency (DOGE) alleged that the Department of the Interior’s Federal Consulting Group (FCG) brokered two costly contracts: $75 million to design federal website customer satisfaction surveys, and an $830 million contract to conduct those surveys. While these figures originated from DOGE announcements (Work | DOGE: Department of Government Efficiency), they have since been echoed by other sources. For example, E&E News (Politico) reported that Interior Secretary Doug Burgum identified and halted “an $830 million survey contract” for a website satisfaction survey, remarking that the survey “looked like it could have been produced by anyone’s child or artificial intelligence” (Burgum, Zeldin trumpet cost-cutting in Trump Cabinet meeting - E&E News by POLITICO). Burgum confirmed this contract was set up through Interior’s FCG – a unit offering consulting and customer satisfaction measurement services to other agencies (Burgum, Zeldin trumpet cost-cutting in Trump Cabinet meeting - E&E News by POLITICO). In a White House Cabinet meeting, Burgum even noted the survey was just a one-page form with “ten questions that anyone’s child in junior high could have put together, or AI could have done for free”, underscoring how basic the product was (Trump's Cabinet reveals 'nonsensical' contracts it has canceled | Fox News).
Crucially, this $830 million contract never fully took effect – it was canceled before signature once discovered by Interior and DOGE (Burgum, Zeldin trumpet cost-cutting in Trump Cabinet meeting - E&E News by POLITICO). Multiple mainstream outlets have cited the $830 million figure in covering the new administration’s spending cuts, effectively verifying that such a contract was in process. For instance, Fox News and Yahoo News highlighted the “$830 million on surveys” that were canceled as part of Trump and Musk’s crackdown on waste (Trump's Cabinet reveals 'nonsensical' contracts it has canceled | Fox News). However, the earlier $75 million design contract is less documented in independent media. That number comes from DOGE’s own disclosures (Work | DOGE: Department of Government Efficiency), and we did not find a separate press or database confirmation of the $75M award. It likely refers to a prior award (under the previous administration) to develop or pilot the customer satisfaction survey system. In summary, credible sources outside of Musk’s team have reported the $830M survey contract’s existence and cancellation (Burgum, Zeldin trumpet cost-cutting in Trump Cabinet meeting - E&E News by POLITICO), whereas the $75M design phase is mentioned mainly in internal communications (with no contradiction from available records). We also attempted to verify these contracts via federal procurement databases. Because the $830M contract was canceled pre-award, it does not appear as an active award in public databases like USASpending.gov or FPDS. Nonetheless, the public reporting by multiple outlets and officials confirms the scale and nature of these planned contracts (Burgum, Zeldin trumpet cost-cutting in Trump Cabinet meeting - E&E News by POLITICO) (Trump's Cabinet reveals 'nonsensical' contracts it has canceled | Fox News).
Cost Comparison: Typical Survey Project vs. $75M/$830M
The price tags – $75 million just to design surveys, and $830 million to execute them – seem extraordinarily high when compared to standard industry costs for large-scale surveying. In the private sector (or even other government contexts), creating and running a national customer/visitor satisfaction survey is far cheaper. Companies can use off-the-shelf Software-as-a-Service tools (like SurveyMonkey, Qualtrics, Google Forms, or enterprise CX platforms such as Medallia) to reach millions of users at a fraction of that cost. These platforms offer online survey design, distribution (via web links, email, or pop-ups), and real-time analysis dashboards essentially as turnkey solutions. For example, an enterprise license for a top-tier survey platform like Qualtrics might run in the hundreds of thousands to a few million dollars per year, even for very large user bases – nowhere near hundreds of millions. In fact, a recent government contract underscores this point: in early 2025 the U.S. Customs and Border Protection (DHS) awarded a 5-year contract for an enterprise Qualtrics survey platform totaling about $6.4 million (roughly $1.3M per year) (DHS CBP inks new 5-year $6M Qualtrics Platform deal on SEWP | OrangeSlices AI). This deal, which covers robust survey capabilities for a major federal agency, suggests that modern feedback tools can be obtained for single-digit millions, not hundreds of millions of dollars.
To put scale in perspective, even if a survey were distributed to millions of people, the marginal cost per response would be trivial – online survey infrastructure and cloud storage cost only pennies per response. Platforms like Google Forms are essentially free (though not FedRAMP-authorized for sensitive government use), and even a highly secure, FedRAMP-compliant tool like Qualtrics or Medallia would run in the low millions for unlimited usage. Contrast this with the Interior Department’s plan: the canceled project was to span the planned five-year period and cover many agencies, but at $830M it would have cost on the order of $166 million per year – vastly out of line with typical pricing. Even accounting for extra needs – e.g. custom development, intensive data security, integration with government systems, or employing analysts to interpret the data – the cost discrepancy is enormous. Hiring a team of data scientists or consultants to analyze survey results might add a few million dollars, not hundreds of millions. For a rough comparison, a private firm could conceivably budget, say, $5–10 million to develop a bespoke survey platform with high security, and perhaps a similar amount annually to operate it and continually survey the public (including staffing, support, and analysis). Over a five-year period, that might total on the order of $50 million – still an order of magnitude lower than $830 million.
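As a back-of-the-envelope check, the comparison above can be computed directly from the figures already cited ($830M ceiling, five-year period, $6.39M CBP/Qualtrics benchmark); the 10 million responses per year is an illustrative assumption, not a figure from any record:

```python
# Rough cost arithmetic for the canceled survey contract.
# Dollar figures are those cited in the article; the response volume
# is an illustrative assumption for a per-response estimate.

CONTRACT_CEILING = 830_000_000   # reported ceiling of the canceled contract ($)
PERIOD_YEARS = 5                 # base year plus four option years
CBP_QUALTRICS_TOTAL = 6_390_000  # DHS/CBP 5-year Qualtrics platform deal ($)

per_year = CONTRACT_CEILING / PERIOD_YEARS
cbp_per_year = CBP_QUALTRICS_TOTAL / PERIOD_YEARS
ratio = CONTRACT_CEILING / CBP_QUALTRICS_TOTAL

responses_per_year = 10_000_000  # assumption: 10M responses/year, all agencies
cost_per_response = per_year / responses_per_year

print(f"Annualized ceiling:     ${per_year:,.0f}")          # $166,000,000
print(f"CBP benchmark per year: ${cbp_per_year:,.0f}")      # $1,278,000
print(f"Ceiling vs. benchmark:  {ratio:.0f}x")              # 130x
print(f"Implied cost/response:  ${cost_per_response:.2f}")  # $16.60
```

Even under this generous assumption of ten million responses a year, the implied cost per response is dollars, not pennies – and the ceiling sits roughly 130× above the CBP benchmark.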
In summary, by any reasonable benchmark, $830M (and even $75M just for design) far exceeds the expected cost of a national-scale satisfaction survey program. Mainstream enterprise solutions demonstrate that multi-million-dollar budgets are sufficient for large-scale surveying. The fact that CBP can cover its needs with about $6.4M over five years (DHS CBP inks new 5-year $6M Qualtrics Platform deal on SEWP | OrangeSlices AI) illustrates how inflated the $830M figure appears. Even considering government-specific overhead or contracting inefficiencies, it’s difficult to justify a survey initiative costing more than 100× what a turnkey solution might cost. This stark contrast fuels skepticism that the original contract pricing was efficient or necessary.
Procurement Records and Scope of the $830M Survey Contract
We searched public contracting records to uncover any solicitations or documentation related to the $830M survey project. Indeed, a trail exists on SAM.gov (the federal procurement portal) from late 2024. In October 2024, Interior’s Interior Business Center (IBC) – which housed the Federal Consulting Group – issued a Sources Sought Notice (a type of request for information) for a “Customer Experience” contract vehicle (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). The notice (ID 140D0425Q0009) outlined a plan to establish a Blanket Purchase Agreement (BPA) or similar vehicle over a base year plus four option years (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). The scope described in the draft Statement of Work aligns closely with the survey contract in question. It called for a contractor to provide “comprehensive survey services” across the government, including:
Survey tools and distribution – the ability to conduct surveys to capture customer/visitor experience for various audiences, either through self-service (agency uses the tool directly) or with contractor assistance (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). This implies providing a survey platform or software (with required FedRAMP security) that agencies could use for website feedback, program evaluations, etc.
Analytics and reporting – delivering analysis of survey results with quality metrics, dashboards, and insights to help agencies improve their services (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). Advanced analytics and possibly AI-based text analysis might be in scope.
Consulting support – experts to advise agencies on how to improve customer experience and satisfaction, and to help implement survey findings into process improvements (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). Essentially, a pool of CX (customer experience) consultants on call.
Continuous improvement and new tech – the SOW even mentions adopting new technologies and “development of better survey services,” implying the contractor should help innovate over time (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI).
In short, Interior was looking for one or more vendors that could cover the entire lifecycle of federal customer satisfaction measurement – from providing the survey platform (licenses, cloud hosting) to potentially designing questionnaires, collecting data, and analyzing/reporting results. The scope included measuring satisfaction with websites, programs, and processes across the federal government (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). Because this was a government-wide initiative, the contract’s potential value was set very high (to allow many agencies to place orders against it). The period of performance was five years total (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI), which explains how the spending could accumulate to the hundreds of millions if every agency made heavy use of the contract.
Following the sources-sought stage, the Interior Business Center would typically issue a formal Request for Proposals (RFP). It appears the procurement moved forward sometime in late 2024 or early 2025. However, by March 2025 – after the change in administration – Interior dissolved the FCG and aborted this contract. According to Interior Secretary Burgum, the contract was canceled just before it would have been signed (Burgum, Zeldin trumpet cost-cutting in Trump Cabinet meeting - E&E News by POLITICO). This timing suggests that a specific vendor selection was likely imminent (or even decided) for the $830M award. We attempted to find the award notice or a contract number via FPDS (Federal Procurement Data System) or USASpending, but presumably because the contract was never finalized, no award entry exists in those systems. Instead, what we have is confirmation from officials that the contract was about to be awarded (“was going out after you were inaugurated, sir” as Burgum noted (Trump's Cabinet reveals 'nonsensical' contracts it has canceled | Fox News)) and then was stopped.
It’s worth noting that the $830M figure likely referred to the ceiling value of a multi-year, multi-agency contract rather than a one-time expenditure. Such contracts are often Indefinite Delivery/Indefinite Quantity (IDIQ) or BPAs with a maximum value. In practice, spending can end up lower. In this case, $830M may have been the maximum if a large number of surveys and services were ordered by many agencies. (Some commentary suggests the combined project had a $905M ceiling: $75M already on design, and $830M for execution (Grok on X: "@jgmac1106 @charliekirk11 @DOGE The IDIQ contract ...), though that commentary is based on the same internal data). Regardless, the scale was extraordinary for a survey program. Publicly available records (like the SAM notice) do confirm the project’s scope and intent – a centralized “Customer Experience” contract for government-wide use (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI) – even if the internal budgeting ($75M design, $830M implementation) comes to us via DOGE’s transparency effort. We did not find a posted justification or official rationale document for the $830M contract, likely because it was never awarded (agencies publish formal Justification and Approval documents for certain non-competitive awards or high-price contracts, but here the process was terminated early). In absence of that, we rely on descriptions from Interior and DOGE that portray the contract as duplicative and overpriced.
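The $905M combined ceiling mentioned in that commentary is simply the sum of the two components reported elsewhere in this piece; a one-line check (using only the article’s own figures) confirms the numbers are internally consistent:

```python
# Sanity check: the $905M combined ceiling cited in commentary equals
# the $75M design award plus the $830M execution contract ceiling.
design_award = 75_000_000
execution_ceiling = 830_000_000
combined = design_award + execution_ceiling
print(f"Combined ceiling: ${combined:,}")  # Combined ceiling: $905,000,000
```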
In summary, our research located the procurement trail leading to the $830M survey contract: Interior’s FCG pursued a five-year, government-wide Customer Experience Surveys vehicle to handle citizen feedback across agencies (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). The contract was on the verge of award in early 2025 when it was canceled for being unnecessary. The lack of an actual contract award record suggests it was stopped just in time, leaving behind only the planning documents and the initial $75M “design” expenditure that had already occurred.
Why Cheaper Options Weren’t Chosen: Procurement Constraints and Practices
Given how much cheaper it could be to use existing survey solutions or smaller contracts, one might wonder how the government arrived at an $830M approach. There are several systemic reasons in federal procurement that can lead to such outcomes:
Interagency “Broker” and Overhead: The involvement of the Federal Consulting Group itself hints at extra cost. FCG operated as a fee-for-service franchise fund inside Interior – essentially, one agency charging other agencies to manage a contract on their behalf. DOGE characterized FCG as an entity “where one government department charges another to broker consulting contracts” (Work | DOGE: Department of Government Efficiency). This means whenever an agency wanted customer satisfaction surveying, instead of buying a SurveyMonkey subscription directly, they might go through FCG, which in turn would hire a contractor and add its own administrative fees. This layering can inflate costs. It also introduces potential misaligned incentives: FCG had an interest in building a very large program (since its “business” was to broker these services). Thus, it may have structured the contract to be as expansive (and expensive) as possible, covering every imaginable agency need, to maximize usage and its own revenue. Once the project was in FCG’s hands, cheaper alternatives like using a simple Google Form or a small-business contractor were likely never seriously considered – agencies were funneled into the “one-stop” contract vehicle which, while convenient, came at a premium.
Contract Bundling & Large Vehicle Contracts: The survey initiative combined the needs of dozens of agencies into one giant procurement. In federal contracting, this is known as consolidation or bundling of requirements – instead of each agency or department running a separate small contract for their surveys, one office (FCG) tried to create a single omnibus contract to serve all. Such bundling often leads to very high contract ceilings (to accommodate all potential orders) and tends to exclude smaller firms. By bundling numerous survey projects into one $830M package, the government essentially limited competition to only big contractors capable of handling a nationwide, multi-agency scope. Small businesses or freelance experts, who could easily handle individual survey projects, would not have the capacity or past performance to bid on the entire bundle. Moreover, procurement regulations do allow (and sometimes even encourage) these large vehicles, especially under the banner of “category management” (avoiding many redundant contracts by using one large contract). A precedent can be seen in other large BPAs – for example, GSA created an IT contract vehicle as a “one-stop-shop” for the Air Force and other agencies with an estimated value of $5.5 billion; it was so broad in scope that it was not set aside for small businesses (small firms could only participate as subcontractors or via limited set-aside orders) (GAO Disagrees with SBA: Bundling Analysis Not Required for BPAs - SmallGovCon). In the case of the survey contract, Interior’s approach was similar: a single massive BPA to meet any agency’s survey needs, which by its nature would favor a big vendor (or a few big vendors) and come with large fixed costs. While consolidating can sometimes save money through economies of scale, it can also drive up costs by reducing competitive pressure and by including many “nice-to-have” features for all users. Here, the $830M ceiling was likely a product of bundling everything (platform, support, analytics, multiple years, multiple agencies) into one deal.
Procurement Rules and Risk Aversion: The federal acquisition process is complex and often biased toward caution and comprehensiveness. Officials might shy away from informal solutions like using a free tool or a patchwork of small contracts, because they prefer a formally procured, fully supported solution that checks all the boxes (security, accessibility, reliability, etc.). For instance, while a tech-savvy team could whip up a secure survey system on a modest budget, if it wasn’t on an approved contract vehicle, agencies might not use it. In this case, FCG presumably opted for a Blanket Purchase Agreement under GSA Schedule or similar, which meant only pre-vetted vendors could compete. This can limit competition to a handful of companies. Also, federal surveys dealing with the public often require compliance with OMB/PRA (Paperwork Reduction Act) and data privacy rules – something large contractors are experienced in handling, whereas a freelancer or small startup might struggle with the paperwork. All these rules can make the procurement team lean towards established players and known methods (often pricier) instead of unconventional cheap solutions. The contract solicitation specifically required FedRAMP-certified technology and robust analytics (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI), which not many off-the-shelf free tools have – thereby ensuring the bid would go to a specialized, likely expensive provider.
“Gold-Plating” of Requirements: It’s possible the contract was over-engineered. The language in the SOW suggests the government wanted not just a survey tool, but cutting-edge capabilities and consulting to “unlock insights” and “establish groundbreaking support” for agencies (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI). Such broad and aspirational requirements can lead bidders to propose large, comprehensive (and costly) solutions. Rather than using a simple survey template, the winning bidder might include fancy AI sentiment analysis, extensive on-site training for every agency, custom integration into dozens of websites, etc. Each of those extras drives up cost. In government contracting there’s a tendency to ask for more than strictly necessary, to ensure all potential needs are covered (“nice-to-have” features get included “just in case”). This scope creep at the requirements stage can yield an unwieldy, expensive contract. By contrast, a lean approach could have been to pilot a survey on one agency’s website with a basic tool, then expand gradually – but FCG’s approach bundled everything upfront.
Acquisition Thresholds and Incentives: Large contracts often carry higher administrative overhead (soliciting, evaluating, and managing a big contract costs the government more time and effort). Once a contract is that large, federal procurement officials may feel they need to go with a known quantity – e.g. a big consulting firm or IT integrator with a proven record – rather than taking a chance on a new, perhaps more efficient entrant. There’s also the issue of contracting officers not splitting requirements: regulations discourage breaking a requirement into smaller parts just to avoid certain approval thresholds. If FCG viewed all these survey needs as a single requirement, they were obliged to pursue one large acquisition. Furthermore, had they tried a different path – say, hiring a few independent survey experts on small contracts – they might have run into federal personnel and contracting rules (for example, you generally cannot hire “personal services” easily, and using contractors for staff augmentation has limits). Thus, the path of least resistance was a big contract. It’s telling that the $830M contract was likely structured as an IDIQ/BPA with multiple awardees (DOGE’s communication implied possibly three vendors were involved at some stage, given mention of a “three-vendor contract” in a related context). Multi-award IDIQs can also inflate perceived cost: the government might award the contract to, say, three companies with a shared ceiling of $830M, expecting to distribute work among them. Each company sees a slice, but the publicized ceiling looks huge. Without competition on each task order, prices can remain high. We don’t have confirmation of how many vendors were to be awarded, but for something this size, agencies often consider multiple awards to address competition concerns.
Preferred Vendor Programs: Another angle is that agencies are sometimes steered toward “preferred” government-wide contracts or incumbents. If, for example, an incumbent contractor (perhaps the one that did the $75M design work) had unique insight, it might have been in a favorable position to win the $830M follow-on. If that incumbent was a large firm with high billing rates, the new contract pricing would reflect that. Government agencies also sometimes use GWACs (Government-Wide Acquisition Contracts) or GSA Schedule contracts for speed – FCG might have used an existing vehicle (the sources-sought notice references GSA FEDSIM and others). Using a pre-existing contract vehicle can limit who can bid (only those on the vehicle) and can drive up costs if those contract holders charge more. For instance, the Interior FCG’s survey BPA could have been set up under the GSA Schedule, meaning pricing would be based on GSA’s catalog rates, which are not necessarily the cheapest available and include contractor overhead and GSA’s fee.
In essence, the decision not to use a small firm or a simple SaaS subscription likely came down to bureaucratic momentum and risk avoidance. The government chose a large, centralized procurement (with all the bells and whistles) over piecemeal solutions. This inevitably raised the price. Only when Musk’s DOGE team scrutinized it did the sheer scale become a point of contention. At that point, the question of “Why didn’t you just use Google Forms for free?” became very pointed. The answer lies in the layers of process described above: by the time the requirement went through the system, it had become a juggernaut that assumed bigger is better.
Finally, it’s worth noting that lack of transparency and accountability may have played a role. Before Elon Musk highlighted this contract, it seems to have evaded public notice. It was not widely reported or debated, meaning there was little outside pressure to minimize cost. Once exposed, it quickly appeared indefensible (“nonsensical” as one official put it (Trump's Cabinet reveals 'nonsensical' contracts it has canceled | Fox News)) and thus was axed. This suggests that the usual internal checks (budget reviews, OMB oversight) did not flag the project, possibly because it was spread across many agency budgets or justified under “improving customer service” goals. With stronger oversight or earlier public scrutiny, the government might have explored more affordable alternatives rather than defaulting to an $830M contract. The case highlights how federal procurement rules and practices – while intended to ensure quality and fairness – can sometimes inadvertently limit competition and inflate costs, producing outcomes far more expensive than private-sector equivalents.
Sources:
Politico’s E&E News report on the Interior Department canceling the $830M survey contract (Burgum, Zeldin trumpet cost-cutting in Trump Cabinet meeting - E&E News by POLITICO)
Fox News coverage of Trump’s Cabinet meeting (quotation of Burgum on the “$830 million… ten questions” survey) (Trump's Cabinet reveals 'nonsensical' contracts it has canceled | Fox News)
DOGE (Department of Government Efficiency) communications via X/Twitter (Work | DOGE: Department of Government Efficiency) (referencing the $75M design and $830M survey contracts)
SAM.gov Sources Sought Notice for Interior’s “Customer Experience” BPA (scope of work for surveys and CX tools) (DOI Sources Sought: Customer Experience BPA | OrangeSlices AI)
Federal contract award data: DHS/CBP’s 5-year $6.39M Qualtrics survey platform contract as a cost benchmark (DHS CBP inks new 5-year $6M Qualtrics Platform deal on SEWP | OrangeSlices AI)
GAO/smallgovcon analysis of bundled large contracts (illustrating a $5.5B one-stop IT contract not set aside for small business)