Procurement and the AI EO — Helping federal CAIOs navigate the path ahead

Recently, the White House issued Executive Order 14110 – Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It’s the first governmentwide directive on the responsible use of artificial intelligence.

Welcome CAIOs!

For many agencies, implementing EO 14110 means formalizing a new position: the Chief Artificial Intelligence Officer, who will drive the creation of each agency’s AI strategy and establish new governance. CAIOs will be tasked with implementing sophisticated risk management requirements so the projects they oversee comply with all applicable laws, regulations, and policies, including those addressing privacy, confidentiality, copyright, human and civil rights, and civil liberties.

In industry, companies of all shapes and sizes have brought on CAIOs to manage their workflows and augment their organizations’ skill sets. I’m encouraged to see their counterparts arrive in government, including our own at GSA, Zach Whitman.

So, to the AI specialists and leaders joining federal agency C-Suites, welcome! We at GSA’s Federal Acquisition Service are excited to help you get the tools you’ll need to accomplish your missions.

The work ahead

The promise of AI is incredible. The latest advancements in large language models and generative AI take a field that has been developing for more than 50 years to a new level. We can see agencies using AI to speed up workflows, improve how the public interacts with federal information, reveal new insights in our data, and improve how we design and deliver programs.

Over the next few months, CAIOs will work on strategies to drive innovation and manage the risks of AI. According to EO 14110, CAIOs will serve as the senior AI advisors to agency leadership and start weighing in on strategic decisions. You’ll work closely with Chief Information Officers and Chief Information Security Officers to put the right safeguards in place so that the AI tools your teams and others within your agencies use meet cybersecurity standards and best practices. Working together with leaders and staff throughout the organization, you may even prototype solutions that can illustrate the capabilities and risks of AI when delivering on your agency’s mission.

But wait, there’s more! You’ll also compile inventories, evaluate products, influence workforce development, prioritize projects, remove barriers, document use cases, assess performance, implement internal controls, and ensure your agency’s AI efforts comply with a host of existing laws and policies.

Time to prioritize

That is a big to-do list! To succeed, you may need outside resources like AI-centric development environments and hardware; SaaS providers offering access to AI modules; and early assistance from AI experts who can create custom AI solutions for specific purposes in your agency. You will also need to implement training for agency staff on how to use AI systems.

Several different GSA acquisition solutions can help CAIOs procure the AI products, services, and solutions they need to achieve their missions. Here are a few:

  • GSA offers easy access to AI development tools from cloud service providers approved under the Federal Risk and Authorization Management Program (FedRAMP) on the Multiple Award Schedule – IT Category.
  • Our Governmentwide Acquisition Contracts — Alliant 2, 8(a) STARS III, and VETS 2 — help agencies quickly and efficiently bring on IT service providers, some of whom can provide targeted AI services.
  • GSA’s Rapid Review report service scans the Multiple Award Schedule and provides a list of approved vendors that meet particular criteria, including common AI services from coding to training, typically in as little as one day. To get started, visit our Market Research as a Service page and order a Rapid Review.

Above all, remember that we’re here to facilitate the business of connecting you with the right technology solution. Contact us with your needs and we will guide you there.

Know the risks

EO 14110 provides the most comprehensive guidance to date on the necessity for agencies to fully consider the risks from their use of AI.

AI tools will be subject to rigorous assessment, testing, and evaluation before they may be used. After that, according to EO 14110, CAIOs must ensure that their AI systems undergo ongoing monitoring and human review, that emerging risks are identified quickly, that their operators are sufficiently trained, and that the AI functionality is documented in plain language for public awareness.

Importantly, EO 14110 charges CAIOs with ensuring their agency’s AI will advance equity, dignity, and fairness. This will require a mix of thoughtful stakeholder engagement and the sophisticated use of data and analytics to anticipate, assess, and mitigate disparate impacts. That includes being alert to factors that contribute to algorithmic discrimination or bias and proactively removing them.

We’re constantly calibrating the balance between convenience and compliance, which is particularly important when preparing to acquire technologies like AI that are new and evolving. Our contracts require vendors to comply with rules, policies, and regulations — including EO 14110 and the NIST AI Risk Management Framework — to ensure you have a safe, secure, sustainable IT infrastructure.

More to come

In 2020, GSA launched the AI Community of Practice to get practitioners from across government talking and sharing best practices, then set up an AI Center of Excellence to put their knowledge into action. Much of their work helped lay the intellectual infrastructure needed to carry out the governmentwide objectives of EO 14110. GSA itself is named in three of them:

  1. Develop and issue a framework for prioritizing critical and emerging technology offerings in the FedRAMP authorization process, starting with generative AI.
  2. Facilitate access to governmentwide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce.
  3. Support the National AI Talent Surge by accelerating and tracking the hiring of AI and AI-enabling talent across the Federal Government through programs including the Presidential Innovation Fellows and the U.S. Digital Corps.

As you can see, there will be much more to come as the government’s AI strategy goes into action. To quote GSA Administrator Robin Carnahan, “GSA is proud to play key roles in supporting this Executive Order to help ensure the federal government leads the way in the responsible, effective use of AI.”

Follow ITC on LinkedIn and subscribe for blog updates.

What does the future of cybersecurity look like?

As we look ahead, there are several key areas of focus that will undoubtedly shape the virtual battleground. Government agencies that proactively address high priorities in these key areas will be better prepared to navigate the evolving digital threat landscape and safeguard their sensitive information and assets. Here are some top drivers we anticipate will impact agencies’ cybersecurity strategy and spending plans.

Zero Trust Architecture (ZTA)

ZTA has been at the forefront of government guidance in recent years. Now that agencies have had time to plan for their ZTA requirements, implementation should begin in earnest. ZTA gives agencies the foundation to build a strong security posture that can evolve with a dynamic and accelerating threat environment.
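The core idea behind ZTA is that no request is trusted by default: every access is evaluated against identity, device posture, and context rather than network location. The sketch below illustrates that per-request decision; the attributes and policy rules are hypothetical examples, not any agency’s actual access policy.

```python
# Illustrative zero-trust sketch: every request is evaluated on its own
# attributes ("never trust, always verify"), with no implicit perimeter trust.
# All attribute names and policy rules here are hypothetical examples.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g., PIV card plus MFA verified
    device_compliant: bool        # endpoint meets the security baseline
    resource_sensitivity: str     # "low" or "high"
    connection_encrypted: bool    # transport-layer encryption in place

def evaluate(req: AccessRequest) -> bool:
    """Grant access only when every per-request check passes."""
    if not (req.user_authenticated and req.connection_encrypted):
        return False
    # High-sensitivity resources additionally require a compliant device.
    if req.resource_sensitivity == "high" and not req.device_compliant:
        return False
    return True

print(evaluate(AccessRequest(True, True, "high", True)))   # granted
print(evaluate(AccessRequest(True, False, "high", True)))  # denied: device posture
```

In a real deployment these checks would be made by a policy engine (the model described in NIST SP 800-207) on every request, not once at login.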

Cybersecurity Supply Chain Risk Management (C-SCRM)

The growing interconnectedness of systems, services, and products makes management and mitigation of supply chain risks even more important. Effective C-SCRM should be a fundamental component in cybersecurity strategy. Having C-SCRM as an essential element in procurement helps to ensure the resilience, security, and continuity of operations for organizations, government agencies, and critical infrastructure.

Post-Quantum Cryptography (PQC)

PQC is an emerging field within the cyber realm that is gaining increased relevance due to the potential threat quantum computers pose to traditional encryption methods. PQC involves the development of new cryptographic algorithms resistant to quantum computer attacks to ensure the security of digital communications and sensitive information. Agencies should begin to plan for future quantum-resistant methods by inventorying their systems and engaging with vendors on how they are addressing quantum-readiness.

Some challenges agencies may face include:

  • The ability to identify PQ-vulnerable systems.
  • The ability to identify and implement appropriate PQC algorithms.
  • The high cost and complexity of implementation.
  • A gap in a trained and certified workforce to implement and maintain PQC algorithms.
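The first two challenges above come down to building a cryptographic inventory. A minimal sketch of that idea follows; the system names and algorithm inventory are hypothetical, and the classification lists are simplified (classical public-key schemes such as RSA and elliptic-curve algorithms are vulnerable to Shor’s algorithm, while NIST’s selected PQC algorithms are standardized in FIPS 203–205).

```python
# Minimal sketch of a cryptographic inventory scan: flag each system whose
# public-key algorithms would fall to a cryptographically relevant quantum
# computer. System names and inventory contents are hypothetical examples.

# Classical public-key algorithms broken by Shor's algorithm.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DSA"}

# NIST-standardized post-quantum algorithms (FIPS 203, 204, 205).
QUANTUM_RESISTANT = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA-128s"}

def needs_migration(algorithms):
    """Return the sorted subset of algorithms that need a PQC migration plan."""
    return sorted(a for a in algorithms if a in QUANTUM_VULNERABLE)

inventory = {
    "payroll-portal": ["RSA-2048", "ECDH-P256"],
    "records-api": ["ML-KEM-768", "ML-DSA-65"],
}

for system, algos in inventory.items():
    at_risk = needs_migration(algos)
    status = "needs PQC migration" if at_risk else "quantum-ready"
    print(f"{system}: {status}", at_risk or "")
```

A real inventory would be discovered from certificates, TLS configurations, and code scans rather than hand-listed, but the classify-and-prioritize step is the same.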

Artificial Intelligence (AI)

The rapid emergence and adoption of generative AI tools has created new challenges, especially for data security. As AI becomes more prevalent in our modern technology, agencies will need to assess the associated risks and develop strategies to mitigate vulnerabilities.

GSA and other agencies are working to support the new Executive Order to help ensure that AI systems are safe, secure, and trustworthy.
