
Artificial Intelligence

Oregon’s Artificial Intelligence (AI) Program establishes the foundation for responsible AI use across state government.

The program builds on the recommendations of Oregon’s AI Advisory Council and early agency experience, focusing on practical risk management, workforce readiness, and clear expectations for responsible AI use within existing enterprise processes.

Initial efforts prioritize low-risk use cases and the development of an AI Policy and Implementation Guide to support more complex applications over time. This work is intended to help agencies make clear, defensible decisions about when AI is appropriate and how it should be governed.

The program supports agency adoption and does not replace agency authority or existing approval processes.

Out for Comment - Responsible AI Usage Policy Package

Please provide comments on the EIS Policy page. The comment period closes May 18, 2026.



Current guidance and policies

Please find current guidance, policy documents, and directives on the EIS Statewide IT Policies, Procedures, and Guidance page. 

Training and Collaboration Tools for State Employees

*Requires M365 Profiles and Workday access


Frequently Asked Questions about AI for State Agencies and Employees

This FAQ provides guidance to help agencies understand requirements and expectations for responsible AI and data use.

Which GenAI tool should state employees use?

Answer:
Microsoft Copilot Chat is the recommended tool for use by state employees and is available to all staff through M365. An enhanced version, M365 Copilot, is also approved and available with an additional agency-approved license and training.

Does Copilot give me access to information I can't already see?

Answer:
No. Copilot does not give anyone new access; it follows the same permissions and protections as Teams, Outlook, and SharePoint.

Can I use public AI chatbots or AI-powered search tools for work?

Answer:
You may use these tools for normal web searching. Do not enter non-public state data (Level 2+) or use them for writing, editing, or summarization for work. Only approved tools like Copilot should be used for that.

Can I use GenAI tools other than Copilot?

Answer:
Copilot is currently the recommended and approved GenAI tool for general use by state employees and should be considered first. Other tools may be used only if they have gone through the Information Technology Investment (ITI) process and have been approved by EIS.

Does an AI tool need approval if it will only handle low-risk data?

Answer:
Yes. Any AI tool used for state business (other than web search engine use, as noted above) must be approved through the appropriate IT and EIS review, regardless of data level.

What if a tool my agency already uses adds new AI features?

Answer:
New AI features, even in an existing tool, must be reviewed through the Information Technology Investment (ITI) process and approved by EIS before use.

Are AI prompts and outputs public records?

Answer:
Generally, yes. If they document state business or support decisions, they are public records and must follow normal records management and retention rules.

Can AI output be used without human review?

Answer:
No. AI output must always be reviewed by a human and must not be the sole basis for official decisions or statements.

Do I need to disclose when AI was used to create a document?

Answer:
For external documents, disclosure is recommended (for example, a note or footnote). Your agency may set more specific rules.

What AI training is available to state employees?

Answer:
The “AI for Public Professionals” course in Workday is the recommended baseline. Additional Copilot training is available through the M365 training library and Microsoft scenario examples.

Is AI training required?

Answer:
State employees with an M365 Copilot license must complete AI for Public Professionals (or another EIS-approved course). Others are encouraged but not currently required to complete training.

What levels of data can be used with approved GenAI tools?

Answer:
Only Level 1 (“Published”) and Level 2 (“Limited”) data are allowed. Level 3 (“Restricted”), Level 4 (“Critical”), and regulated data must not be used. Please see the latest guidance and policy on data usage on the EIS Policies, Procedures, and Guidance page.

Do I need to classify data before using it with GenAI tools?

Answer:
Yes. Classify your inputs first and use only Level 1–2 data. Treat and label GenAI outputs at the highest sensitivity level of the input data.

How are AI tools and services reviewed and procured?

Answer:
AI tools and services must go through the Information Technology Investment (ITI) process and normal procurement review, including security, privacy, and data handling terms in contracts. “Built-in” AI features are treated as a change in risk and must be reviewed and approved before use.