When the team at the Parkland Center for Clinical Innovation set out to build AI-powered clinical tools, they knew the challenge went beyond creating a technical solution that worked. They also needed to find a way to make sure frontline clinicians would trust and use the tools to help patients.
President and CEO Steve Miff shared the story in a session at Convergence AI Dallas, a two-day conference focused on AI trends and innovation across the Metroplex. He was joined by Mirna Abyad Baloul, an IP strategist and lawyer with Fulton Jeang PLLC, for a breakout session on responsible AI and governance.
“The core of it is transparency,” Miff said. “I believe transparency leads to trust, and trust is what is required to be able to actually meet not only the compliance, but the deployment.”
Small team, big reach
Parkland Center for Clinical Innovation is a nonprofit innovation organization focused on using AI, nonmedical drivers of health, and connected communities of care to improve health outcomes. While it has just 40 employees, PCCI is affiliated with Parkland Health, the publicly owned hospital system for Dallas County. That gives it the agility of a small organization along with access to real-world deployment at significant scale.
That combination has proven to be a meaningful advantage. One of PCCI’s AI-powered tools is a model that predicts mortality risk for trauma patients and launches directly from the electronic medical record in the emergency department. Rather than giving providers only the patient’s mortality risk score, the model also displays the top five contributing factors in real time. That gives doctors and nurses important context at the exact moment they are making decisions about care.
This type of prediction transparency is baked into PCCI’s framework for trustworthy AI, Miff said.
“We found that this is, again, one of the most impactful things,” he said, “not only from a compliance but from a usability perspective.”
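PCCI hasn’t published how its model surfaces those factors, but a minimal sketch of the general technique, per-prediction feature attribution, might look something like this. The feature names, data, and linear model here are illustrative stand-ins, not PCCI’s implementation:

```python
# Minimal sketch (not PCCI's code): surface the top five factors behind a
# single risk prediction via per-feature contributions of a logistic
# regression. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["age", "systolic_bp", "heart_rate", "gcs_score", "lactate"]  # illustrative

# Train on historical trauma outcomes (synthetic stand-in data here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 4] - X[:, 3] + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain_prediction(x, top_k=5):
    """Return mortality risk plus the top_k features pushing it up or down."""
    risk = model.predict_proba(x.reshape(1, -1))[0, 1]
    # For a linear model, each feature's pull on the log-odds is its
    # coefficient times its value relative to the training mean.
    contributions = model.coef_[0] * (x - X.mean(axis=0))
    top = sorted(zip(FEATURES, contributions), key=lambda t: abs(t[1]), reverse=True)
    return risk, top[:top_k]

risk, factors = explain_prediction(X[0])
print(f"Predicted mortality risk: {risk:.1%}")
for name, c in factors:
    print(f"  {name}: {'raises' if c > 0 else 'lowers'} risk ({c:+.2f} log-odds)")
```

The same idea extends to nonlinear models through attribution methods such as SHAP; the point is that each prediction ships with its own explanation rather than a bare score.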
The monitoring problem
So far, Miff said, PCCI has developed 19 clinical AI models internally, generating 34.9 million patient predictions to support clinical decisions and population health. Many of the predictions relate to serious medical conditions, including pediatric asthma, HIV, sepsis, and colorectal cancer, and the models have flagged 2.8 million high-risk people as good candidates for early intervention.
With that scale, however, came a new challenge: how to monitor all the models to make sure they continued to function correctly.
The human cost was high, Miff said. As PCCI deployed more AI models, team members spent more time monitoring them, pulling them away from building new ones. It was a concerning trend.
“I’m going to lose all my team because they came to innovate,” he said. “They didn’t come to monitor a model.”
The solution, Miff explained, was to build an AI monitoring layer on top of the deployed models. This “AI on top of AI” monitors each model’s performance in real time and alerts when there are deviations worthy of a closer look.
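Miff didn’t detail the implementation, but one common form such a monitoring layer takes is automated drift detection: comparing each model’s recent prediction distribution against a baseline and alerting when the deviation crosses a threshold. A minimal sketch using the population stability index, with illustrative names, data, and cutoffs:

```python
# Minimal sketch (assumptions, not PCCI's system): an automated monitor that
# compares each deployed model's recent score distribution to a baseline and
# flags drift via the population stability index (PSI).
import numpy as np

def psi(baseline, recent, bins=10):
    """Population stability index between two score samples."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    b = np.histogram(baseline, edges)[0] / len(baseline)
    r = np.histogram(recent, edges)[0] / len(recent)
    b, r = np.clip(b, 1e-6, None), np.clip(r, 1e-6, None)
    return float(np.sum((r - b) * np.log(r / b)))

def check_models(score_feeds, baseline_scores, alert_threshold=0.2):
    """Flag any deployed model whose recent scores have drifted."""
    alerts = []
    for name, recent in score_feeds.items():
        value = psi(baseline_scores[name], recent)
        if value > alert_threshold:  # 0.2+ is a common "investigate" cutoff
            alerts.append((name, value))
    return alerts

rng = np.random.default_rng(1)
baseline = {"trauma_mortality": rng.beta(2, 8, 5000)}
recent = {"trauma_mortality": rng.beta(3, 6, 800)}  # shifted distribution
for name, value in check_models(recent, baseline):
    print(f"ALERT: {name} drifted (PSI={value:.2f}); needs a closer look")
```

A monitor like this runs continuously across all deployed models, so humans only step in when something actually deviates, which is the labor savings Miff described.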
Responsibility doesn’t outsource
PCCI certainly isn’t alone in this challenge, Baloul said. As organizations across industries move from proof of concept into scaled deployment, the monitoring burden can begin to drain the innovation culture. Yet, simply trusting an AI model without verification is risky.
“That is the heart of the AI problem,” she said. “At what point do you just trust? And the answer is you should never trust. You should always have a threshold of check.”
From Baloul’s perspective, everyone working with AI needs to remember that with innovation comes responsibility. Government regulation of AI is lagging years behind the technology, requiring companies to stay on their toes.
“When you have AI and AI agents, it’s kind of like having an employee,” she said. “…The output of an AI agent is actually on the company itself.”
The legal exposure follows the same logic, she said. “If you have a lawsuit, I can’t sue AI.”
Baloul recommends that companies combine technology-driven quality checks with a human in the loop until they are confident that AI outputs and actions meet the company’s established quality threshold.
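In code, that recommendation might amount to a simple quality gate: automated checks run first, and anything below the organization’s threshold is routed to a human reviewer. A minimal sketch in which the threshold, checks, and names are hypothetical, not Baloul’s prescription:

```python
# Minimal sketch: automated quality checks plus a human-in-the-loop fallback.
# All names and the threshold value are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    confidence: float  # model's own score, 0.0-1.0

QUALITY_THRESHOLD = 0.90  # set by the company, tightened or relaxed over time

def passes_automated_checks(output: AIOutput) -> bool:
    """Technology-driven checks: here, a format screen and a confidence screen."""
    return bool(output.text.strip()) and output.confidence >= QUALITY_THRESHOLD

def handle(output: AIOutput) -> str:
    if passes_automated_checks(output):
        return f"AUTO-APPROVED: {output.text}"
    # Below threshold: a human stays in the loop and owns the decision.
    return f"QUEUED FOR HUMAN REVIEW: {output.text}"

print(handle(AIOutput("Patient flagged for follow-up.", 0.95)))
print(handle(AIOutput("Ambiguous recommendation.", 0.62)))
```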
AI governance as competitive advantage
For Miff, creating a framework for trustworthy AI is central to PCCI’s work of enabling the technology to transform health outcomes. Patients, providers, health systems, and payers all need to know that the models are accurate, perform consistently, are secure, and meet compliance rules.
For companies looking to deploy AI at scale, the lesson he shared at Convergence AI Dallas is that transparency is key at every step.
“Once you deploy,” Miff said, “you need to make sure that the end user trusts that these perform the way that we intended to perform.”