How Safe Is Your AI?

AI is racing into small and medium businesses through marketing tools, research platforms, chatbots, email systems, and internal automations, often faster than leadership can keep up. In this conversation, Tower 23 IT founder Scott Cooper explains how “Shadow AI” emerges when well-meaning employees plug sensitive data into free tools, connect systems through APIs, or rely on AI-generated output without oversight. He shares why an AI acceptable use policy is now essential, what data should never be fed into public models, and how enterprise-grade tools like Microsoft Copilot or ChatGPT Enterprise can reduce risk. Scott also unpacks real-world examples, from contract reviews gone wrong to deepfake-style voice scams, and lays out a simple principle for leaders who want to use AI safely: keep humans on the front end and the back end of any AI-powered process.

Key Takeaways

  1. Start with an AI acceptable use policy that defines purpose, allowed tools, prohibited data types, and any departments that should restrict AI use.
  2. Treat free or consumer AI tools as higher risk, especially when dealing with contracts, pricing models, personal data, or confidential business information.
  3. Monitor data flows and integrations, including APIs and tools like Zapier or Make, to understand where AI is being called and what is being sent.
  4. Train employees that AI output is not automatically trustworthy and must be checked for hallucinations, factual accuracy, and compliance before it is used.
  5. Explore frameworks like the NIST AI Risk Management Framework to guide leadership roles, accountability, and governance around AI adoption.
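The policy checks in takeaways 2 and 3 can be partially automated. As a minimal sketch, a business could screen prompts for prohibited data types before they reach a public AI tool; the pattern names, regexes, and function below are hypothetical illustrations, not a complete data loss prevention solution, and any real deployment would need patterns tuned to the organization's own policy.

```python
import re

# Hypothetical examples of data types an AI acceptable use policy
# might prohibit sending to public models.
PROHIBITED_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_prohibited_data(text: str) -> list[str]:
    """Return the names of any prohibited data types found in text."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this contract for jane.doe@example.com, SSN 123-45-6789."
findings = flag_prohibited_data(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")
```

A check like this could sit in an API gateway or an automation step in a tool like Zapier or Make, so that every call to an AI service is logged and screened, keeping a human decision on the front end of the process.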