Alliant 2 Industry Partners: Meeting the Federal Government’s AI Needs

Artificial intelligence (AI) is actively transforming the way we personally and professionally complete tasks of varying complexity. Noted for its ability to enhance productivity, the AI we know today is built on decades of groundbreaking work and has many practical applications. From virtual assistants to smart automobiles to business processes and workflows, artificially intelligent systems are making our lives easier.

GSA is at the forefront of leveraging emerging technologies like AI to enhance the efficiency and accessibility of federal services for the American public. By prioritizing safety and privacy, GSA ensures that AI advancements help improve government operations while mitigating risks. GSA also plays a vital role in supporting the AI Executive Order, reinforcing the federal government’s commitment to the responsible and effective use of AI.

While recent advances in generative AI have brought renewed attention to the importance of safe and effective AI deployment, it’s worth noting that for years, GSA’s industry partners have been helping agencies responsibly work with previous generations of AI technologies.

GSA’s industry partners delivering AI solutions

Since its 2018 inception, the Alliant 2 Governmentwide Acquisition Contract (GWAC), one of the most successful IT Services GWACs in federal government history – and a designated Best-in-Class (BIC) vehicle – has delivered high-value IT services to our customer agencies. Alliant 2 is well-positioned to bring critical, real-world AI solutions to the federal government, particularly regarding national defense, health care, and environmental protection.

We recently polled our Alliant 2 Shared Interest Group to learn about some AI projects they’ve implemented for federal agencies. Let’s take a look at a few of those examples.

A more efficient defense

The general welfare of citizens and their protection from external threats are among the federal government’s greatest duties — a confident and stable national defense is critical.

Federal agencies can now take advantage of notable AI advancements in wargaming and simulation training tools. These training methods rely on realistic simulations to run varied wartime exercises across diverse war zones, equipment, strategies, and conflict scenarios. For context, think of a military simulation video game on steroids, with prospective real-world consequences added.

Rapid retraining of computer vision models is another AI process that can support national interests. It involves using AI to markedly increase the accuracy of those models and reduce misidentification errors. The Department of Defense has employed this technology to improve the performance of autonomous reconnaissance vehicles.

Our talented Alliant 2 industry partners also offer geospatial intelligence (GEOINT) and signals intelligence (SIGINT) experience. GEOINT blends machine learning with visualizations to analyze activities on Earth for national security purposes, and SIGINT involves the monitoring and interception of signals from systems used by adversarial targets. Both of these intelligence functions are independently significant. However, as a unit, GEOINT and SIGINT offer a robust approach to national security.

Impacts on health outcomes

Alliant 2’s AI capabilities extend beyond national defense; they also include protections for the health and welfare of U.S. residents. Our industry partners have contributed to many advancements in health care, including the use of AI for COVID-19 research and development. They are also involved in efforts to improve cancer diagnostics and drug labeling review processes using AI.

Different approaches to environmental protection

Executive Order (EO) 14096, Revitalizing Our Nation’s Commitment to Environmental Justice for All, directs the federal government to protect the environment. The EO expressly states that “an environment that is healthy, sustainable, climate-resilient, and free from harmful pollution and chemical exposure” is a fundamental responsibility of the federal government on behalf of its citizens.

Our industry partners actively support the resolution of climate-related concerns addressed in EO 14096. One effort involves using AI to improve the speed and accuracy of weather forecasts. In that work, scientists at the National Aeronautics and Space Administration evaluated data collected from climate models to predict weather events weeks to decades into the future. This is significant for disaster preparedness, since the same technology helps forecast natural disasters.

Winning partnerships

Alliant 2’s BIC status signifies its relevance as a top-tier IT services vehicle that is propelled by the best and brightest in the industry. We are very honored to partner with companies that are well-vetted and highly qualified to move us into the future of IT services, always keeping us ahead of the curve.

Moving forward with AI

AI is beginning to play a significant role in how the federal government gets things done, and its impact becomes more evident by the day. Whether enhancing the effectiveness of our military, improving health outcomes for U.S. citizens, or keeping our water safe, federal agencies are using AI to enhance our well-being. GSA and our Alliant 2 industry partners continue to move that needle, and we look forward to continuing to serve the AI and IT services needs of our customer agencies. To become familiar with other federal government AI projects, please visit

For more information on the Alliant 2 GWAC, please click here.

Follow ITC on LinkedIn and subscribe for blog updates.

Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide now available

Goal: Help agencies buy Generative AI

Artificial Intelligence (AI) is one of the most profound technological shifts in a generation or more. If we learn how to harness its power correctly, AI tools could significantly strengthen how the federal government serves the public.

Seeing AI’s potential – and its risks – the president signed Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence (AI EO) on October 30, 2023.

Since it was signed, there has been a lot of activity around highlighting AI use cases and increasing the AI talent and skills in the federal workforce.

I blogged about the procurement considerations it emphasized, and we explored the pivotal role of the chief AI officer.

The AI EO also sparked an ongoing effort to guide responsible artificial intelligence development and deployment across the federal government. 

Section 10.1(h) of the AI EO asks GSA to create a resource guide to help the acquisition community procure generative AI solutions and related specialized computing infrastructure.

In this post, I’ll describe our new Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide and highlight some of the specific content.

A Focus on Generative Artificial Intelligence

As many of you know, some of the most popular and promising tools in the broader field of artificial intelligence fall under the umbrella of generative AI.

Fundamentally, generative AI tools are software. They are starting to show up in our email and word processing programs, the search engines we use every day, and the more sophisticated software that agencies rely on. These tools can help many agencies automate simple tasks or solve complex problems. We’ve seen agencies use generative AI tools to write summaries of rules, create first drafts of memos, and build more helpful chatbots. And many more uses are spooling up right now, including using generative AI tools to write computer code and develop new training scenarios for agency staff.

These generative AI tools are getting better and more agencies are asking their contracting officers to help procure the right solutions. 

At the most basic level, because generative AI tools are software, acquiring them must follow the same acquisition policies and rules as other IT and software purchases. Contracting officers should consider cybersecurity, supply chain risk management, data governance and other standards and guidelines just as they would with other IT procurements.

At the same time, generative AI tools are unique. We are all hearing about the risks of generative AI solutions, some of which we talk about in the guide – from bias in how the systems were trained… to “hallucinations” where a generative AI tool states wrong information that it just made up. 

Contracting officers play a critical role in ensuring commercial generative AI offerings conform with federal and agency guidance, laws and regulations and have the right safeguards and protections while enabling their agencies to get the most out of generative AI projects.

We put together the Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide to help contracting officers and their teams understand how to do just that.  

Practical Tips for the Acquisition Community

Because the field is emerging and the use cases are diverse, it’s impossible to provide guidance that applies to every situation. So the guide offers questions that contracting officers should ask and a process to use when scoping a generative AI acquisition. 

The guide also makes a few specific recommendations of other actions the acquisition workforce should take to procure generative AI solutions effectively. Many generative AI tools may already be available to agency staff in tools they use every day or through government cloud platforms they already have accounts on. And these tools may be available through professional service and system integrator contracts the agencies already have in place. In that way, the fastest acquisition may be no acquisition, or as simple as adding more “credits” to an existing cloud platform account. 

Before embarking on a large scale or complex new acquisition for generative AI tools, see if there is a simpler route. Work with your agency’s chief information officer, chief artificial intelligence officer, and chief information security officer to determine what you already have in place and whether you can just use an existing solution or contract.

Here are a few other recommendations in the guide:

  • Start with Your Agency’s Needs. Rather than starting with solutions and specifications, define the problem that the agency wants generative AI tools to help solve.
  • Scope and Test Solutions. Given the evolving nature of most generative AI tools, it is essential for agencies to use testbeds and sandboxes to try solutions before committing to large scale buys with too many unknowns about product performance.
  • Manage and Protect Data. Generative AI relies on data “inputs” to create content “outputs,” so it is critical to know where the data comes from, what its limitations are, and how it will be used and protected.
  • Control Costs. Generative AI is often billed like other Software as a Service offerings, so usage costs can grow quickly if not appropriately monitored and managed.
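
To make the cost point concrete, here is a rough back-of-the-envelope sketch of usage-based billing. The per-token price and usage volumes below are hypothetical assumptions for illustration only; actual generative AI pricing varies by vendor and contract.

```python
# Illustrative only: per-token price and volumes are hypothetical,
# not figures from the resource guide or any vendor price list.

def monthly_cost(requests_per_day, tokens_per_request, price_per_1k_tokens):
    """Estimate the monthly cost of a token-billed generative AI service."""
    daily_tokens = requests_per_day * tokens_per_request
    return daily_tokens / 1000 * price_per_1k_tokens * 30  # ~30 billing days

# A small pilot...
pilot = monthly_cost(requests_per_day=100, tokens_per_request=2_000,
                     price_per_1k_tokens=0.01)
# ...can grow quickly once the same tool is rolled out agency-wide.
rollout = monthly_cost(requests_per_day=50_000, tokens_per_request=2_000,
                       price_per_1k_tokens=0.01)

print(f"Pilot:   ${pilot:,.2f}/month")    # $60.00/month
print(f"Rollout: ${rollout:,.2f}/month")  # $30,000.00/month
```

The 500-fold jump between pilot and rollout is exactly why the guide stresses monitoring usage the way you would any other metered cloud service.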

Acquisition staff also benefit from knowing what procurement actions their agency and others have already taken, so the guide includes a searchable data dashboard with information about recent AI-related contract actions.

Specialized Computing Infrastructure

The guide also talks about “specialized computing infrastructure” per the AI EO. Specialized computing infrastructure can be thought of as the high-performance computers, powerful chips, software, networks and resources made specifically for building, training, fine-tuning, testing, using and maintaining artificial intelligence applications. Computing infrastructure can be on-premises, cloud-based, or a combination of both.

While most agencies will likely access generative AI tools through the cloud, some may need to build lightweight specialized computing infrastructure to support their specific requirements.

This is the start

The biggest challenge in producing guidance for any technology is anticipating and accommodating change. To meet that challenge, we organized a working group, gathered input from a wide array of acquisition specialists and technical experts, and collaborated with our IT Vendor Management Office to inform and support faster, smarter IT buying decisions across the federal community. We welcome your feedback at

Generative AI technology will continue to evolve. The risks and benefits will shift over time. Agencies will experiment with generative AI tools. And contracting officers will play a critical role by working closely with program and IT staff to find, source and acquire the right generative AI solutions for agencies’ needs. We hope the Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide helps the acquisition community enable their agencies to start to responsibly harness the power of this promising technology and better serve the American people.


Procurement and the AI EO — Helping federal CAIOs navigate the path ahead

Recently, the White House issued Executive Order 14110 – Safe, Secure, and Trustworthy Artificial Intelligence. It’s the first governmentwide directive encouraging the responsible use of artificial intelligence.

Welcome CAIOs!

For many agencies, implementing EO 14110 means formalizing a new position: the Chief Artificial Intelligence Officer, who will drive the creation of each agency’s AI strategy and establish new governance. CAIOs will be tasked with implementing sophisticated risk management requirements so the projects they oversee comply with all applicable laws, regulations, and policies, including those addressing privacy, confidentiality, copyright, human and civil rights, and civil liberties.

In industry, companies of all shapes and sizes have brought on CAIOs to manage their workflows and augment their organizations’ skill sets. I’m encouraged to see their counterparts arrive in government, including our own at GSA, Zach Whitman.

So, to the AI specialists and leaders joining federal agency C-Suites, welcome! We at GSA’s Federal Acquisition Service are excited to help you get the tools you’ll need to accomplish your missions.

The work ahead

The promise of AI is incredible. The latest advancements in Large Language Models and Generative AI take a field that has been building up for more than 50 years to a new level. We can see agencies using AI to speed up workflows, improve how the public interacts with federal information, reveal new insights in our data, and improve how we design and deliver programs.

Over the next few months, CAIOs will work on strategies to drive innovation and manage the risks of AI. According to EO 14110, CAIOs will serve as the senior AI advisors to agency leadership and start weighing in on strategic decisions. You’ll work closely with Chief Information Officers and Chief Information Security Officers to set up the right safeguards so that the AI tools your teams and others within your agencies use meet cybersecurity standards and best practices. Working together with leaders and staff throughout the organization, you may even prototype solutions that illustrate the capabilities and risks of AI in delivering on your agency’s mission.

But wait, there’s more! You’ll also compile inventories, evaluate products, influence workforce development, prioritize projects, remove barriers, document use cases, assess performance, implement internal controls, and ensure your agency’s AI efforts comply with a host of existing laws and policies.

Time to prioritize

That is a big to-do list! To succeed, you may need outside resources like AI-centric development environments and hardware; SaaS providers who can provide access to AI modules; and early assistance from AI experts who can create custom AI solutions for specific purposes in your agency. You will also need to implement training for agency staff on how to use AI systems.

Several different GSA acquisition solutions can help CAIOs procure the AI products, services and solutions they need to achieve their missions. Here are a few:

  • GSA offers easy access to AI development tools from Federal Risk and Authorization Management Program (FedRAMP)-approved cloud service providers on the Multiple Award Schedule – IT Category.
  • Our Governmentwide Acquisition Contracts — Alliant 2, 8(a) STARS III, and VETS 2 — help agencies quickly and efficiently bring on IT service providers, some of whom can provide targeted AI services.
  • GSA’s Rapid Review report service scans the Multiple Award Schedule and provides a list of approved vendors that meet particular criteria, including common AI services from coding to training, typically in as little as one day. To get started, visit our Market Research as a Service page and order a Rapid Review.

Above all, remember that we’re here to facilitate the business of connecting you with the right technology solution. Contact us with your needs and we will guide you there.

Know the risks

EO 14110 provides the most comprehensive guidance to date on the necessity for agencies to fully consider the risks from their use of AI.

AI tools will be subject to rigorous assessment, testing, and evaluation before they may be used. After that, according to EO 14110, CAIOs must ensure that their AI systems undergo ongoing monitoring and human review, that emerging risks are identified quickly, that system operators are sufficiently trained, and that AI functionality is documented in plain language for public awareness.

Importantly, EO 14110 charges CAIOs with ensuring their agency’s AI will advance equity, dignity, and fairness. This will require a mix of thoughtful stakeholder engagement and the sophisticated use of data and analytics to anticipate, assess, and mitigate disparate impacts. That includes being alert to factors that contribute to algorithmic discrimination or bias and proactively removing them.

We’re constantly calibrating the balance between convenience and compliance, which is particularly important when preparing to acquire technologies like AI that are new and evolving. Our contracts require vendors to comply with rules, policies, and regulations — including EO 14110 and the NIST AI Risk Management Framework — to ensure you have a safe, secure, sustainable IT infrastructure.

More to come

In 2020, GSA launched the AI Community of Practice to get practitioners from across government talking and sharing best practices, then set up an AI Center of Excellence to put their knowledge into action. Much of their work helped lay the intellectual infrastructure needed to carry out the governmentwide objectives of EO 14110. GSA itself is named in three of those objectives:

  1. Develop and issue a framework for prioritizing critical and emerging technologies offerings in the FedRAMP authorization process, starting with generative AI.
  2. Facilitate access to governmentwide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce.
  3. Support the National AI Talent Surge by accelerating and tracking the hiring of AI and AI-enabling talent across the Federal Government through programs including the Presidential Innovation Fellows and the U.S. Digital Corps.

As you can see, there will be much more to come as the government’s AI strategy goes into action. To quote GSA Administrator Robin Carnahan, “GSA is proud to play key roles in supporting this Executive Order to help ensure the federal government leads the way in the responsible, effective use of AI.”


What does the future of cybersecurity look like?

As we look ahead, several key areas of focus will undoubtedly shape the virtual battleground. Government agencies that proactively embrace and implement current high priorities in these key areas will be better prepared to navigate the evolving digital threatscape and safeguard their sensitive information and assets. Here are some top drivers we anticipate will impact agencies’ cybersecurity strategy and spending plans.

Zero Trust Architecture (ZTA)

ZTA has been at the forefront of government guidance in recent years. Now that agencies have had time to plan for their ZTA requirements, implementation of those strategies should commence. ZTA gives agencies the foundation to build a strong security posture that evolves with an ever-changing technological environment of dynamic and accelerating threats.

Cybersecurity Supply Chain Risk Management (C-SCRM)

The growing interconnectedness of systems, services, and products makes management and mitigation of supply chain risks even more important. Effective C-SCRM should be a fundamental component in cybersecurity strategy. Having C-SCRM as an essential element in procurement helps to ensure the resilience, security, and continuity of operations for organizations, government agencies, and critical infrastructure.

Post-Quantum Cryptography (PQC)

PQC is an emerging field within the cyber realm that is gaining relevance due to the potential threat quantum computers pose to traditional encryption methods. PQC involves developing new cryptographic algorithms resistant to quantum-computer attacks to ensure the security of digital communications and sensitive information. Agencies should begin planning for quantum-resistant methods by inventorying their systems and engaging with vendors on how they are addressing quantum readiness.

Some challenges agencies may face include:

  • The ability to identify PQ-vulnerable systems.
  • The ability to identify and implement appropriate PQC algorithms.
  • The high cost and complexity of implementation.
  • A gap in a trained and certified workforce to implement and maintain PQC algorithms.
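
The inventory step described above can be sketched as a simple triage exercise. This is an illustrative sketch, not an official tool; the system names are hypothetical, and the algorithm classifications reflect common guidance (NIST considers RSA and elliptic-curve schemes vulnerable to a large-scale quantum computer, while ML-KEM, ML-DSA, and SLH-DSA are its standardized post-quantum algorithms in FIPS 203–205).

```python
# Illustrative sketch of a cryptographic inventory triage.
# Classifications are simplified; consult NIST guidance for authoritative lists.

# Public-key schemes broken by a large-scale quantum computer
# (Shor's algorithm defeats factoring and discrete-log problems).
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DSA"}

# NIST-standardized post-quantum algorithms (FIPS 203, 204, 205).
QUANTUM_RESISTANT = {"ML-KEM", "ML-DSA", "SLH-DSA"}

def triage(system_inventory):
    """Split (system, algorithm) pairs into migration buckets."""
    needs_migration, ready, unknown = [], [], []
    for system, algorithm in system_inventory:
        if algorithm in QUANTUM_VULNERABLE:
            needs_migration.append(system)
        elif algorithm in QUANTUM_RESISTANT:
            ready.append(system)
        else:
            unknown.append(system)  # flag for manual review
    return needs_migration, ready, unknown

# Hypothetical agency inventory.
inventory = [
    ("vpn-gateway", "RSA-2048"),
    ("doc-signing", "ECDSA-P256"),
    ("pilot-service", "ML-KEM"),
    ("legacy-app", "3DES"),  # symmetric/legacy cipher: review separately
]
needs, ready, unknown = triage(inventory)
print("Migrate:", needs)    # ['vpn-gateway', 'doc-signing']
print("Ready:  ", ready)    # ['pilot-service']
print("Review: ", unknown)  # ['legacy-app']
```

Even a rough pass like this helps agencies see where the PQ-vulnerable systems listed in the first challenge above actually live, and scopes the vendor conversations that follow.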

Artificial Intelligence (AI)

The rapid emergence and adoption of generative AI tools has created new challenges, especially for data security. As AI becomes more prevalent in our modern technology, agencies will need to assess the associated risks and develop strategies to mitigate vulnerabilities.

GSA and other agencies are working to support the new Executive Order to help ensure that AI systems are safe, secure, and trustworthy.
