- Utah's $100M AI initiative aims to balance ethical AI with job security.
- Gov. Cox's pro-human agenda focuses on intuition, creativity and human-centric roles.
- Regulatory sandbox allows AI testing in sensitive areas under strict state oversight.
Editor's note: This is the first of three stories in a series that examines Utah's approach to artificial intelligence and its ongoing governance.
LEHI — In the "Silicon Slopes," the conversation around artificial intelligence is shifting. It's no longer just about what the machines can do; it's about what they should do.
As Utah Gov. Spencer Cox pushes a "pro-human" agenda, announced at the end of the 2025 Utah AI Summit in December, the state is quietly positioning itself as a global laboratory for ethical AI — driven not by political rhetoric but by high-stakes pragmatism.
But for the average Utah family, the question remains: Is this a marketing buzzword, or a shield against job displacement? The two men at the center of this storm are Manish Parashar, the University of Utah's inaugural chief AI officer, and Kevin Williams, CEO of Ascend, a leading Utah AI consultancy and an advisor to the state's Responsible AI Initiative.
The goal of this $100-million pivot, according to these experts, is to ensure the human worker remains the protagonist of the story.
A calculator moment
For parents worried about their children's education, Parashar offers a historical perspective. He compares the current AI anxiety to the introduction of the calculator.
"There was a lot of concern that it would take away the ability to do math," Parashar says. "Instead, it allowed us to focus on things that were more important. AI accelerates the processes, but it allows us to focus on things that are uniquely human: intuition, creativity and imagining new solutions."
Parashar, who leads the One-Utah Responsible AI Initiative, argues that the value of the human worker is moving away from "answering" and toward "initiating."
"AI is a good way to come to a solution once you know what the problem is," Parahsar explains. "But who initiates? Who asks the question, 'Could this be better?' That's human. We are moving toward a world where curiosity is the ultimate future-proof skill."
The skill flip: Judgment over syntax
However, the skills required for that "initiation" are flipping. Williams, who has two teenagers of his own, notes that the "go-to" skills of the last decade — hardcore coding and data analysis — may ironically be the most subject to AI displacement.
"The human side of humans is what will be in higher demand," Williams says. "Liberal arts skills like judgment, discernment, leadership and flexibility — these are the things that are going to be needed."
Parashar agrees, noting that the University of Utah is doubling down on "durable skills." He believes AI shouldn't be seen as a replacement for learning, but as an "accelerator" for human potential. "AI allows us to focus on things that were previously buried under manual processes: intuition, creativity and imagining new solutions to age-old problems," Parashar says.
The 'job-hugging' reality
While academia looks at the horizon, Williams is in the trenches with Utah businesses. He's seen a new phenomenon emerge: "job-hugging."
"People aren't dumb," he says. "When an AI consultant shows up, they assume I'm there to displace them."
However, Williams argues that the most successful companies aren't using AI to cut headcount, but to solve "tribal knowledge" gaps — the nuances that an algorithm can't see but a veteran employee understands implicitly. By automating the tasks people hate — or aren't good at — workers are being pushed toward "high-leverage" roles. "Even if a role is being augmented, there is still a role for the person who understands the nuances of the job. It's about putting people at the right inflection point."
Still, Williams offers a sobering warning about the "pro-human" future. If we automate all the "boring" administrative tasks and leave humans only at the high-consequence inflection points, we fundamentally change the nature of work.
"If you put people at those 'crux points' all day long, what does that job look like? Williams asks. "Are we all going to end up feeling like an ER doctor? We have to ensure we aren't just creating a new kind of burnout by making every minute of the workday a high-stakes decision."
Does Utah's regulation have teeth?
Skeptics often point to Utah's business-friendly climate as a sign of weak regulation. But both leaders disagree. Williams points out that the Utah Department of Commerce has already taken on tech giants like TikTok and Meta.
Utah's unique "regulatory sandbox" allows companies to test AI in "high-consequence" areas — like mental health and prescription renewals — under strict state oversight. "It's a pragmatic state," Williams notes. "If we have an underserved youth population in need of mental health support, and AI can provide that safely, it becomes a question of the lesser of two evils. The state is willing to experiment, but they are serious about social protection."
The verdict for 2026
As we dive into the legislative session, the message from the top is clear: The horse is out of the barn. Parashar emphasizes that the goal of the $100-million initiative is to ensure Utahns aren't just "users" of AI, but the architects of it. Experts agree that banning AI in schools or offices isn't just futile; it's a competitive disadvantage.
"Bans don't work," Williams says. "And it will set back the student population if they aren't entering the workforce with the skills they need."
For Utahns, the path forward isn't in competing with the speed of an algorithm, but in doubling down on the "durable skills" of judgment, discernment and "humanity" that no large language model can replicate.
In the next installment of this three-part series on artificial intelligence in Utah, Margaret Busse, the architect of Utah's Office of AI Policy, goes inside the "sandbox," where the nation's first legal AI medical prescriptions are being written — and regulated.