US technology policy must keep pace with AI innovation

By Webdesk


As artificial intelligence (AI) innovation outpaces news cycles and grabs public attention, a framework for its responsible and ethical development and use has become increasingly important to ensuring this unprecedented wave of technology achieves its full potential as a positive contribution to economic and social progress.

The European Union has already been working on responsible AI legislation; almost two years ago, I shared my thoughts on those initiatives, calling the proposed AI Act “an objective and measured approach to innovation and societal considerations.” Today, leaders from tech companies and the United States government are coming together to map out a unified vision for responsible AI.

The power of generative AI

OpenAI’s release of ChatGPT last year captured the imagination of technology innovators, business leaders and the public, and consumer interest in and understanding of the possibilities of generative AI exploded. However, with AI going mainstream, including as a political issue, and given people’s propensity to experiment with and stress-test these systems, the capacity for misinformation, the impact on privacy, and the risks of cybersecurity breaches and fraudulent behavior could quickly become secondary concerns.

In an early effort to address these potential challenges and to ensure that AI innovation protects the rights and safety of Americans, the White House has announced new actions to promote responsible AI.

In a fact sheet released by the White House last week, the Biden-Harris administration outlined three actions to “promote responsible U.S. artificial intelligence (AI) innovation and protect people’s rights and safety.” These include:

  • New investments to enable responsible US AI R&D.
  • Public reviews of existing generative AI systems.
  • Policies to ensure the US government leads by example in mitigating AI risks and leveraging AI opportunities.

New investments

In terms of new investment, the $140 million in funding from the National Science Foundation to launch seven new National AI Research Institutes pales in comparison to what has been raised by private companies.

While the US government’s investment in AI is a step in the right direction, it is minuscule compared with other governments’ investments, notably China’s, which began in earnest in 2017. There is an immediate opportunity to amplify the impact of these investments through academic partnerships for workforce development and research. The government should fund AI centers alongside the academic and business institutions already at the forefront of AI research and development, driving innovation and creating new opportunities for companies to harness the power of AI.

Collaborations between AI centers and top academic institutions, such as MIT’s Schwarzman College of Computing and Northeastern’s Institute for Experiential AI, help bridge the gap between theory and practical application by bringing together experts from academia, industry, and government to collaborate on groundbreaking research and development projects with real-world applications. By partnering with large enterprises, these centers can help companies integrate AI into their operations more effectively, improving efficiency, cutting costs and delivering better outcomes for consumers.

In addition, these centers help educate the next generation of AI experts by providing students with access to state-of-the-art technology, hands-on experience on real-world projects, and mentorship from industry leaders. By taking a proactive and collaborative approach to AI, the US government can help shape a future where AI enhances rather than replaces human work. As a result, all members of society can benefit from the opportunities offered by this powerful technology.

Public reviews

Model evaluation is critical to ensuring that AI models are accurate, reliable and free of bias, which is essential for successful deployment in real-world applications. For example, imagine an urban-planning use case in which a generative model is trained on data from redlined cities with historically underserved poor populations: the model will simply produce more of the same. The same goes for lending bias, as more financial institutions use AI algorithms to make credit decisions.

If these algorithms are trained on data that discriminates against certain demographic groups, they can wrongly deny loans to those groups, compounding economic and social inequalities. While these are just a few examples of bias in AI, the issue should remain at the top of the agenda no matter how quickly new AI technologies and techniques are developed and deployed.
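To make the lending example concrete, here is a minimal sketch of the kind of fairness check an evaluator might run over a model’s credit decisions. The data, group labels and threshold below are illustrative assumptions (the 80% cutoff borrows the “four-fifths rule” of thumb from US employment law), not a prescribed standard:

    # Minimal, hypothetical fairness check for a loan-approval model.
    # Decisions: 1 = approved, 0 = denied. Group labels are illustrative.

    def approval_rate(decisions, groups, group):
        """Fraction of applicants in `group` whose loans were approved."""
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    def disparate_impact_ratio(decisions, groups, protected, reference):
        """Approval rate of the protected group relative to the reference group."""
        return (approval_rate(decisions, groups, protected)
                / approval_rate(decisions, groups, reference))

    # Hypothetical model outputs for twelve applicants in two groups.
    decisions = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1]
    groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

    ratio = disparate_impact_ratio(decisions, groups, protected="A", reference="B")
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60 in this toy example
    if ratio < 0.8:  # the four-fifths rule of thumb
        print("Warning: approval rates suggest possible bias against group A.")

Real-world evaluations use richer metrics, such as equalized odds or calibration across groups, but even a check this simple shows how historical bias in training data surfaces in a model’s outputs.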

To combat bias in AI, the administration announced a new opportunity for public model evaluation at the AI Village at DEF CON 31, a forum where researchers, practitioners and enthusiasts come together to explore the latest advances in artificial intelligence and machine learning. The evaluation is a collaborative initiative with some of the key players in the space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI and Stability AI, running on an evaluation platform provided by Scale AI.

In addition, the evaluation will measure how well the models align with the principles and practices outlined in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This is a positive development: the administration is engaging directly with industry and benefiting from the expertise of the space’s tech leaders, whose companies have effectively become commercial AI labs.

Government policy

With regard to the third policy action, ensuring that the US government leads by example in mitigating AI risks and seizing AI opportunities, the Office of Management and Budget will release draft policy guidance on the US government’s use of AI systems for public comment. Again, no timeline or details for this guidance have been given, but an executive order on racial equity issued earlier this year is expected to figure prominently.

The executive order contains a provision directing government agencies to use AI and automated systems in a manner that advances equity. For this policy to have a meaningful impact, it must carry incentives and consequences; it cannot be mere optional guidance. NIST security standards, for example, are effectively mandatory for most government agencies: at a minimum, non-compliance is deeply embarrassing for those involved, and in some parts of the government it is grounds for personnel action. Government AI policies, whether issued through NIST or not, must work the same way to be effective.

Moreover, the cost of complying with such regulations should not become a barrier to startup-driven innovation. What could be achieved, for example, with a framework in which compliance costs scale with the size of the company? Finally, as the government becomes a major consumer of AI platforms and tools, it is paramount that its policies set the bar for how such tools are built. Make adherence to these guidelines a literal, or even an effective, procurement requirement (e.g., the FedRAMP security standard), and this policy can move the needle.

As generative AI systems become more powerful and widespread, it is essential that all stakeholders – including founders, operators, investors, technologists, consumers and regulators – are thoughtful and intentional in how they pursue and engage with these technologies. While generative AI, and AI more broadly, has the potential to revolutionize industries and create new opportunities, it also poses significant challenges, particularly around bias, privacy and ethical considerations.

Therefore, all stakeholders must prioritize transparency, accountability and collaboration to ensure that AI is developed and used responsibly and beneficially. That means investing in ethical AI research and development, engaging with diverse perspectives and communities, and establishing clear guidelines and regulations for developing and deploying these technologies.


