Smart Business Tips
Sign In
  • Home
  • Business
    • Business Coaching
    • Business Growth
    • Business Tools & Apps
  • Entrepreneurship
    • Entrepreneurs
    • Crypto
    • Innovation
    • Investing
    • Leadership
    • Productivity
  • Contact US
    • Blog
  • Branding
    • Content Marketing
    • Digital Marketing
    • E-commerce
    • Marketing Strategies
    • Personal Finance
  • Sales
    • Small Business Tips
    • Social Media
    • Startups
    • Tech Trends
    • Investing
  • Shop
Notification
Essential tips for a successful trip to the Himalayas
Innovation

Essential tips for a successful trip to the Himalayas

5 Effective Strategies for Conflict Resolution in the Workplace
Small Business Tips

5 Effective Strategies for Conflict Resolution in the Workplace

5 High-Impact Ways to Integrate Traditional & Digital Marketing
Content Marketing

5 High-Impact Ways to Integrate Traditional & Digital Marketing

10 Essential Leadership Training Programs to Elevate Skills
Small Business Tips

10 Essential Leadership Training Programs to Elevate Skills

Font ResizerAa
Smart Business TipsSmart Business Tips
  • Home
  • Business
  • Entrepreneurship
  • Contact US
  • Branding
  • Sales
  • Shop
Search
  • Home
  • Business
    • Business Coaching
    • Business Growth
    • Business Tools & Apps
  • Entrepreneurship
    • Entrepreneurs
    • Crypto
    • Innovation
    • Investing
    • Leadership
    • Productivity
  • Contact US
    • Blog
  • Branding
    • Content Marketing
    • Digital Marketing
    • E-commerce
    • Marketing Strategies
    • Personal Finance
  • Sales
    • Small Business Tips
    • Social Media
    • Startups
    • Tech Trends
    • Investing
  • Shop
Sign In Sign In
Follow US
Made by ThemeRuby using the Foxiz theme. Powered by WordPress
Smart Business Tips > Blog > Tech Trends > Inside the US Government’s Unpublished Report on AI Safety
Tech Trends

Inside the US Government’s Unpublished Report on AI Safety

Admin45
Last updated: August 6, 2025 9:43 pm
By
Admin45
4 Min Read
Inside the US Government’s Unpublished Report on AI Safety
SHARE


At a computer security conference in Arlington, Virginia, last October, a few dozen AI researchers took part in a first-of-its-kind exercise in “red teaming,” or stress-testing a cutting-edge language model and other artificial intelligence systems. Over the course of two days, the teams identified 139 novel ways to get the systems to misbehave including by generating misinformation or leaking personal data. More importantly, they showed shortcomings in a new US government standard designed to help companies test AI systems.

The National Institute of Standards and Technology (NIST) didn’t publish a report detailing the exercise, which was finished toward the end of the Biden administration. The document might have helped companies assess their own AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several AI documents from NIST that were not published for fear of clashing with the incoming administration.

“It became very difficult, even under [president Joe] Biden, to get any papers out,” says a source who was at NIST at the time. “It felt very like climate change research or cigarette research.”

Neither NIST nor the Commerce Department responded to a request for comment.

Before taking office, President Donald Trump signaled that he planned to reverse Biden’s Executive Order on AI. Trump’s administration has since steered experts away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action plan released in July explicitly calls for NIST’s AI Risk Management Framework to be revised “to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.”

Ironically, though, Trump’s AI Action plan also calls for exactly the kind of exercise that the unpublished report covered. It calls for numerous agencies along with NIST to “coordinate an AI hackathon initiative to solicit the best and brightest from US academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.”

The red-teaming event was organized through NIST’s Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems saw teams attack tools. The event took place at the Conference on Applied Machine Learning in Information Security (CAMLIS).

The CAMLIS Red Teaming report describes the effort to probe several cutting edge AI systems including Llama, Meta’s open source large language model; Anote, a platform for building and fine-tuning AI models; a system that blocks attacks on AI systems from Robust Intelligence, a company that was acquired by CISCO; and a platform for generating AI avatars from the firm Synthesia. Representatives from each of the companies also took part in the exercise.

Participants were asked to use the NIST AI 600-1 framework to assess AI tools. The framework covers risk categories including generating misinformation or cybersecurity attacks, leaking private user information or critical information about related AI systems, and the potential for users to become emotionally attached to AI tools.

The researchers discovered various tricks for getting the models and tools tested to jump their guardrails and generate misinformation, leak personal data, and help craft cybersecurity attacks. The report says that those involved saw that some elements of the NIST framework were more useful than others. The report says that some of NIST’s risk categories were insufficiently defined to be useful in practice.



Source link

Join Our Newsletter
Subscribe to our newsletter to get our newest articles instantly!
Share This Article
Facebook Email Copy Link
Leave a Comment Leave a Comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Recipe Rating




Follow US

Find US on Social Medias
FacebookLike
XFollow
YoutubeSubscribe
TelegramFollow
Ad image

You Might Also Like

Legal software company Clio drops B on law data giant vLex
Tech Trends

Legal software company Clio drops $1B on law data giant vLex

By
Admin45
July 1, 2025
The 7 Best Prime Day Action Camera Deals for Thrill Seekers (2025)
Tech Trends

The 7 Best Prime Day Action Camera Deals for Thrill Seekers (2025)

By
Admin45
July 11, 2025
OpenAI agreed to pay Oracle B a year for data center services
Tech Trends

OpenAI agreed to pay Oracle $30B a year for data center services

By
Admin45
July 22, 2025
19 Best Back-to-School Deals for 2025
Tech Trends

19 Best Back-to-School Deals for 2025

By
Admin45
August 16, 2025
The Fairphone (Gen. 6) Review: Better Than Ever
Tech Trends

The Fairphone (Gen. 6) Review: Better Than Ever

By
Admin45
August 13, 2025
The 12 Best Prime Day Purchases Our Gear Team Made
Tech Trends

The 12 Best Prime Day Purchases Our Gear Team Made

By
Admin45
July 11, 2025

SmartBusinessTips

  • Business Tools & Apps
  • Marketing Strategies
  • Social Media
  • Tech Trends
  • Branding
  • Business
  • Crypto
  • Sales
  • About Us
  • Privacy Policy
  • Member Login
  • Contact Us
  • Business Coaching
  • Business Growth
  • Content Marketing
  • Branding

@Smartbusinesstips Copyright-2025-2027 Content.

Don't not sell my personal information
Welcome Back!

Sign in to your account

Username or Email Address
Password

Lost your password?

Not a member? Sign Up