Connect with us

News

U.S. Standards and Tech Group Seeks Public Input on AI Development Guidelines

Published

on

The American National Standards Institute (ANSI) and the U.S. Technology Policy Committee (USTPC) have issued a public call for input on developing guidelines for artificial intelligence (AI) safety and security.

The U.S. National Institute of Standards and Technology (NIST) is actively seeking input from both AI companies and the public on risks associated with generative AI and misinformation. These risks encompass creating fake images, videos, audio, and the potential for AI systems to generate false or misleading information.

Within the U.S. Department of Commerce, NIST has issued a request for information, aligned with the recent presidential executive order, focusing on the secure and responsible development and utilization of artificial intelligence (AI).

To gather important feedback for testing AI systems, NIST is encouraging public input by February 2, 2024, to develop robust testing methods ensuring AI system safety.

Secretary Raimondo highlighted the NIST’s initiative based on President Biden’s October executive order, directing NIST to develop guidelines for AI system evaluation, foster consensus-based standards, and create testing environments for AI system assessment.

The NIST framework aims to provide a foundation for the safe, reliable, and responsible development of AI technologies.

NIST seeks input from AI companies and the public on two specific topics: generative AI risk management and reducing the risks of AI-generated misinformation.

Generative AI, with its potential for creativity and problem-solving, raises concerns about misuse, including generating fake content and engaging in discriminatory behavior.

In addition to misuse risks, concerns also exist about generative AI disrupting the job market, interfering with elections, and potentially surpassing human capabilities with catastrophic consequences.

The NIST request aims to gather information about optimal domains for conducting “red-teaming” in AI risk evaluation and formulating guidelines for best practices.

Red-teaming, originating from Cold War simulations, involves a group simulating potential adversarial scenarios to assess vulnerabilities. It has been utilized in the cybersecurity field as “penetration testing” to find security vulnerabilities exploitable by real-world attackers.

In August 2022, a public evaluation red-teaming event was held at the DEF CON cybersecurity conference, organized by AI Village, SeedAI, and Humane Intelligence.

In November 2022, NIST announced the formation of the AI Consortium to develop AI standards and guidelines, releasing a formal request for applicants to join. The AI Consortium focuses on creating policies and metrics for human-centered AI, guiding its safe, ethical, and responsible development.

 

Read also: Why Rollups present a unique business model in crypto: Insights from Galaxy Ventures Exec

 

0 0 votes
Article Rating
Continue Reading
Advertisement Earnathon.com
Click to comment
0 0 votes
Article Rating
Subscribe
Notify of
guest

0 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments

Crypto News Update

Latest Episode on Inside Blockchain

Crypto Street

Advertisement



Trending

ALL Sections

Recent Posts

0
Would love your thoughts, please comment.x
()
x