How the Government Should Think About AI

Artificial Intelligence (AI) is one of the most transformative technologies of our time. While the full breadth of AI applications and limitations is beyond the scope of this blog, we've outlined four key considerations the federal government must understand: the human touch, real-world risk, the continual evolution of AI, and techno-ethical standards.

A Solution and a Problem

AI has the potential to revolutionize nearly every industry, from healthcare to transportation, and its impact on society and the economy will only continue to grow. Still, most leaders understand there is no such thing as a silver-bullet solution. Looking at the transformation of technology over the last 30-50 years, it's clear that every breakthrough comes with a certain level of technical debt, regulatory blind spots, and a need for collaboration between the private and public sectors. AI's ability to analyze large data sets is probably the biggest point of opportunity for the federal government. Policymaking around healthcare, military readiness, human resource allocation, and cybersecurity can be evidence-driven in a way that's been inaccessible with human-only analysts. At the same time, a machine-only approach has the potential to create discriminatory outcomes, infringe on civil liberties, raise questions about transparency, displace federal employees, and exacerbate existing inequalities.


The Human Touch  

Government agencies should focus on using AI to augment human capabilities rather than replace them altogether. Human-exclusive skills such as empathy, creativity, and critical thinking are needed in conjunction with technology. AI algorithms are only as strong as their input, so humans are a vital component of the equation. As AI advances, it will be humans who evaluate the civil consequences of its application. The focus should always circle back to a single question: how do we best serve American citizens?


Real World Risk  

As these deep learning systems refine their outputs, the factors behind those outputs become more opaque. Even with something as familiar as the Google search engine, "why" the algorithm promotes one link over another isn't completely understood. At best, Google has known factors and gives merchants guidance accordingly. Similarly, agencies will find themselves relying on AI-generated conclusions without fully knowing why. This leaves a huge gap in accountability. We've already seen issues with facial recognition technology, and problems of bias and built-in prejudice will continue to grow as more designers and manufacturers experiment with AI. Predictive analyses and automated responses require robust parameters and quality assurance standards. At the scale of managing global issues, these challenges can become dangerous and uncontrollable, and that's based on known issues alone. As AI progresses, it's likely there will be unforeseen risks and challenges as well.


Continual Evolution 

As a leading superpower, it is the duty of the federal government to also lead in AI research and development. AI is a subject of intense global interest and has applications in every facet of society. In the major areas of defense, health, and education, the federal government cannot afford to fall behind. Investment in think tanks, contract opportunities, and public-private partnerships will prove fruitful here. Agencies can stay curious by fostering industry conversations with those who are on the front lines of AI development. There are also trusted contractors willing to give clear explanations and working definitions of AI-related terms. A basic understanding of machine learning, natural language processing, and artificial general intelligence will appropriately shape internal agency conversations. Investing in the research will enable economic growth, prevent stagnation, and empower innovation.


Techno-Ethical Standards 

Techno-ethics refers to the moral, legal, and social issues surrounding technology. These standards can start with industry leaders, but ultimately a regulatory framework will be needed to guide the proliferation of AI. Some issues, like data use and privacy, are already at the forefront of policy consideration. Many deep learning models are fed with information from the Internet, and how companies should account for this use of data is an open policy question. Intersecting with data use is the modern interpretation of copyright and intellectual property. Other considerations include transparency requirements, bias mitigation, oversight standards, and ethical review bodies for various AI applications. This level of guidance must come from the federal level, to be further refined at the state level. Without these techno-ethical standards, AI truly will be the Wild West: full of innovation, but also rife with human rights violations.


The AI Revolution is Here 

With the hype over ChatGPT, it feels like AI arrived overnight, but agencies and industry recognize this revolution has been a long time coming. Its paradigm-shifting impact can be compared to the introduction of the Internet: the applications give users everyday, everywhere assistance. As AI matures, it will have a significant impact on current and future employees, both in terms of the types of jobs available and the skills required to perform those jobs. From an education perspective, government agencies should incentivize education and training programs that prepare workers for the new economy.

These considerations provide a starting point for the larger conversation that industry and government need to continue. By recognizing the strengths and limitations of AI, the federal government can help ensure AI is used in a manner that benefits generations to come.