Several months ago, we wrote an article on Vermont’s developing AI policies. Vermont has emerged as one of the more proactive US states, continuously refining its approach to AI governance. Building upon the foundation laid last year, the state has implemented significant updates to its AI guidelines, reflecting a commitment to responsible innovation and ethical AI use. In this article we’ll examine the original policies, the recent updates, and their implications for AI utilization in Vermont, with a focus on ethics, risks, and mitigation strategies. At the end we’ve assembled some sample policies that may be adapted for use in an organizational setting.
Original Policies
Vermont’s initial AI guidelines, established in 2023, centered around a comprehensive Code of Ethics for AI systems within state government. These guidelines emphasized transparency, explainability, accountability, and fairness. They mandated that AI systems be designed to support human-led processes, with a strong focus on maintaining public trust.
The state also created two collaborative bodies: the Council on Artificial Intelligence and the Division of Artificial Intelligence within the Vermont Agency of Digital Services. These entities were tasked with oversight, direction, and operationalization of AI guidance within Vermont.
Recent Updates and Expansions
Vermont has significantly broadened its AI governance framework since the initial guidelines were published. One of the most notable changes is the expansion of the AI Council and Division’s scope beyond state government. These bodies now serve as hubs for AI best practices and education across various sectors, collecting, creating, and sharing governance frameworks for AI use.
A major policy shift now allows state employees to use generative AI for official duties, subject to specific limitations. This includes the use of free generative AI tools for tasks such as research, document creation, and correspondence. However, the policy strictly prohibits entering non-public information into publicly available AI tools and places responsibility for the accuracy of AI-generated content on employees.
Vermont has also identified four key trends expected to shape the workforce:
1. Increased human/machine collaboration
2. A shift towards work requiring flexibility, creativity, and critical thinking
3. Broader access to information and specialized expertise
4. Decrease in rote decision-making by humans

Sample Policies and Guidelines
Drawing on Vermont’s AI governance framework, we have assembled a high-level guide for AI users in a business setting. It outlines best practices for responsible implementation of AI systems and gives organizations a starting point for navigating AI regulation and ethical use.
Establishing Governance Structures
Create robust internal governance mechanisms for effective AI oversight. This helps ensure that AI implementation aligns with organizational goals and values, while offering a clear framework for decision-making and accountability:
– Form working groups with AI experts, business leaders, and key stakeholders
– Define business use cases for AI systems
– Assign roles and responsibilities
– Enforce accountability measures
– Assess outcomes regularly
Ethical Considerations
Develop and adhere to a code of ethics for AI systems to maintain public trust and prevent potential harm. This helps organizations navigate complex ethical dilemmas and ensures AI systems respect human dignity and rights:
– Prioritize human-centered use that recognizes individual dignity and value
– Comply with relevant laws and regulations
– Maintain individual rights and privacy
– Update the code of ethics annually
AI System Usage Guidelines
Establish clear guidelines for AI system usage to prevent misuse and protect sensitive information. These guidelines enable employees to use AI tools responsibly, balancing innovation with necessary precautions; a minimal pre-submission screening sketch follows the list:
– Obtain appropriate approvals before using AI for official business
– Use free AI tools for tasks like research and document creation
– Avoid inputting non-public information into publicly available AI tools
– Verify the accuracy of AI-generated content
– Follow established procurement processes for paid AI services
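To make the “no non-public information in public tools” rule concrete, here is a minimal Python sketch of a pre-submission screen. The marker patterns and the screen_prompt helper are our own illustrative assumptions, not part of Vermont’s policy; a real deployment would pair this with formal data classification and human review rather than rely on pattern matching alone.

    # Minimal sketch of a pre-submission screen for public AI tools.
    # The patterns below are illustrative assumptions, not policy.
    import re

    # Hypothetical markers that suggest non-public content.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN format
        re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),  # document markings
        re.compile(r"\bINTERNAL USE ONLY\b", re.IGNORECASE),
    ]

    def screen_prompt(text: str) -> list[str]:
        """Return reasons the text should not go to a public AI tool."""
        findings = []
        for pattern in SENSITIVE_PATTERNS:
            if pattern.search(text):
                findings.append(f"matched sensitive pattern: {pattern.pattern}")
        return findings

    prompt = "Summarize this CONFIDENTIAL incident report for leadership."
    issues = screen_prompt(prompt)
    if issues:
        print("Blocked from public AI tool:", issues)
    else:
        print("No obvious non-public markers; human review still applies.")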
Risk Management and Mitigation
Adopt strategies to manage AI-related risks in support of long-term sustainability and legal compliance. This proactive approach helps organizations identify and address potential issues before they escalate into significant problems; a simple risk-scoring sketch follows the list:
– Conduct regular risk assessments for AI systems
– Develop risk management policies
– Perform impact assessments, especially for high-risk AI systems
– Create public disclosure protocols for AI use and impacts
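As one way to operationalize the “high-risk” trigger for impact assessments, the sketch below scores each risk by likelihood times impact and flags entries above a threshold. The 1–5 scales, the threshold of 15, and the example register entries are assumptions for illustration, not prescribed values.

    # Illustrative risk-register entry and scoring rule; the 1-5 scales
    # and the "high risk" threshold are assumptions, not prescribed.
    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        system: str
        description: str
        likelihood: int  # 1 (rare) to 5 (almost certain)
        impact: int      # 1 (negligible) to 5 (severe)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

        def needs_impact_assessment(self, threshold: int = 15) -> bool:
            # High-scoring risks trigger the formal impact assessment.
            return self.score >= threshold

    register = [
        AIRisk("resume-screener", "possible bias against protected groups", 3, 5),
        AIRisk("chat-assistant", "occasional factual errors in drafts", 4, 2),
    ]

    for risk in register:
        flag = ("IMPACT ASSESSMENT REQUIRED"
                if risk.needs_impact_assessment() else "monitor")
        print(f"{risk.system}: score {risk.score} -> {flag}")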
Transparency and Accountability
Build trust through transparent AI practices to maintain stakeholder confidence and meet regulatory requirements. This openness also facilitates better understanding and acceptance of AI systems within the organization and among the public; an audit-trail sketch follows the list:
– Maintain clear documentation of AI systems and their decision-making processes
– Implement audit trails for AI-driven decisions
– Communicate openly about AI use, development, and data sources
– Consider using “regulatory sandboxes” to test AI applications in controlled environments
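A minimal audit trail can be an append-only log of each AI-assisted decision. The sketch below writes JSON Lines records; the field names, the audit.jsonl path, and the choice to hash prompts rather than store raw input are our assumptions, chosen so the trail stays reviewable without retaining potentially sensitive text.

    # Minimal append-only audit log for AI-assisted decisions (JSON Lines).
    # Field names and the log path are illustrative assumptions.
    import hashlib
    import json
    from datetime import datetime, timezone

    def log_ai_decision(path: str, model: str, prompt: str,
                        output: str, reviewer: str) -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model,
            # Hash the prompt so the trail is auditable without storing
            # raw, potentially sensitive input text.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output": output,
            "human_reviewer": reviewer,  # accountability stays with a person
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_ai_decision("audit.jsonl", "public-llm-v1",
                    "Draft a permit denial letter",
                    "Drafted; edited by staff before sending", "j.doe")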
Data Governance and Privacy
Protect data integrity and individual privacy. Robust data governance practices help organizations comply with privacy regulations and maintain ethical standards in data usage; a classification-and-access sketch follows the list:
– Establish clear data governance protocols
– Classify data based on sensitivity
– Practice data minimization by collecting only necessary information
– Implement strict access controls for sensitive data
– Conduct privacy impact assessments regularly
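Sensitivity classification and access control can be encoded directly in tooling. The sketch below is a toy version: the tier names, the role-to-clearance mapping, and the rule that public AI tools only ever receive public data are assumptions for illustration, to be replaced by an organization’s actual classification scheme.

    # Sketch of sensitivity tiers and simple access checks; tier names
    # and the role mapping are assumptions for illustration.
    from enum import IntEnum

    class Sensitivity(IntEnum):
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        RESTRICTED = 3

    # Hypothetical clearance levels per role.
    ROLE_CLEARANCE = {
        "contractor": Sensitivity.PUBLIC,
        "employee": Sensitivity.INTERNAL,
        "data_steward": Sensitivity.RESTRICTED,
    }

    def can_use_in_ai_tool(label: Sensitivity, tool_is_public: bool) -> bool:
        # Public tools only ever receive PUBLIC data; managed tools may
        # handle more under contract, subject to policy.
        if tool_is_public:
            return label == Sensitivity.PUBLIC
        return label <= Sensitivity.CONFIDENTIAL

    def can_access(role: str, label: Sensitivity) -> bool:
        return ROLE_CLEARANCE.get(role, Sensitivity.PUBLIC) >= label

    print(can_use_in_ai_tool(Sensitivity.INTERNAL, tool_is_public=True))  # False
    print(can_access("employee", Sensitivity.CONFIDENTIAL))               # False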
Continuous Monitoring and Improvement
Maintain ongoing oversight of AI systems to preserve their effectiveness and alignment with organizational goals. This adaptive approach allows for timely adjustments and improvements to AI systems as technologies and circumstances evolve; a drift-monitoring sketch follows the list:
– Develop a visual dashboard for real-time updates on AI system health
– Implement health score metrics for easy monitoring
– Set up automated monitoring for bias, drift, and anomalies
– Create performance alerts for deviations from predefined parameters
– Define custom metrics aligned with organizational KPIs
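The drift and health-score items can be approximated with a few lines of standard-library Python. In the toy monitor below, the baseline/recent windows, the 3-sigma cap, and the alert threshold of 60 are all assumptions; production systems would use proper drift statistics and feed a dashboard rather than print.

    # Toy drift monitor: compares recent model scores against a baseline
    # window and derives a 0-100 health score. Thresholds are assumptions.
    import statistics

    def drift_zscore(baseline: list[float], recent: list[float]) -> float:
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9
        return abs(statistics.mean(recent) - mu) / sigma

    def health_score(baseline: list[float], recent: list[float]) -> int:
        # Map drift to a 0-100 score; drift of 3 sigma or more scores 0.
        z = drift_zscore(baseline, recent)
        return max(0, round(100 * (1 - min(z, 3.0) / 3.0)))

    baseline = [0.71, 0.69, 0.72, 0.70, 0.68, 0.73, 0.70]  # e.g., weekly accuracy
    recent = [0.61, 0.60, 0.63]

    score = health_score(baseline, recent)
    print(f"health score: {score}/100")
    if score < 60:  # illustrative alert threshold
        print("ALERT: model drift exceeds tolerance; trigger review")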
Stakeholder Engagement
Involve diverse perspectives in AI governance for comprehensive and inclusive AI strategies. This collaborative approach helps address potential blind spots and ensures AI systems meet the needs of all affected parties:
– Collaborate across departments and areas of expertise
– Engage end users, employees, and community members in AI discussions
– Provide regular updates on AI initiatives and their impacts
Compliance and Reporting
Maintain alignment with regulatory requirements to avoid legal issues and preserve organizational credibility. Regular reporting and reviews help organizations adapt to evolving AI regulations and demonstrate their commitment to responsible AI use:
– Conduct annual reviews of AI use within the organization
– Prepare reports on AI impacts, including effects on privacy and potential discrimination
– Stay informed about evolving AI regulations and adjust practices accordingly
Responsible AI implementation requires a comprehensive approach that balances innovation with ethical considerations. We hope these high-level guidelines will help your organization begin to harness the benefits of AI while mitigating risks and maintaining public trust.