New Data Reveals Widespread, Risky AI Use at Work

    August 5, 2025
    2 min read

    In July 2025, Anagram conducted a national survey to understand how employees are using AI tools like ChatGPT, Gemini, and Copilot at work. The results reveal rapidly growing AI adoption alongside behaviors that could put organizations at significant risk.

    Who participated in the survey?

    We surveyed 500 full-time employees across industries and regions in the United States. Participants represented a wide range of ages, roles, and income levels, giving us a look into how today’s workforce interacts with AI on the job.

    Key findings: How employees are using AI tools

    • 78% of employees are already using AI tools at work — even when their companies haven’t set clear policies.

    • 58% admit to pasting sensitive data into large language models, including client records, financial data, and internal documents.

    • 45% say they’ve used banned AI tools on the job.

    • 40% would knowingly violate company policy to finish a task faster.

    Why these findings matter

    The release of this data comes at a critical time: the Cybersecurity and Infrastructure Security Agency (CISA) is facing major budget cuts and workforce reductions, reducing the federal government’s ability to lead large-scale cybersecurity awareness campaigns.

    “With government resources shrinking, private companies must take on a bigger role in securing their networks and educating their teams,” said Harley Sugarman, Founder & CEO of Anagram. “Our survey makes it clear: employees are willing to trade compliance for convenience. That should be a wake-up call.”

    How companies can respond

    To address these risks, organizations need modern, engaging security awareness training that moves beyond outdated once-a-year sessions.

    Anagram’s human-driven training platform provides:

    • Bite-sized videos that fit into employees’ workflows

    • Interactive puzzles and exercises to reinforce learning

    • Frequent training that evolves with emerging threats

    And here’s the difference: we don’t treat employees as the problem. Our survey showed that many workers feel guilty about risky AI use, yet they’re also in the best position to protect your organization. That’s why Anagram gives them the tools, context, and confidence to be part of the solution.

    When employees feel like defenders, not scapegoats, companies can finally close the gap between policy and behavior.

    Get the full report

    View the full survey results or book a demo to learn how Anagram can help your company strengthen its human security layer.