Steven Walchek and his company Liminal help enterprises turn real-world work patterns into self-discovering, self-assembling agents that deploy and evolve without manual design.

Steven Walchek is the Founder and CEO of Liminal, a platform helping large enterprises in regulated industries securely scale generative AI through governance, visibility, and multi-model access. He has spent more than 15 years building and scaling companies across fintech, cloud, and enterprise software, with a career focused on turning emerging technology into operational advantage.
He has been involved in three successful exits, including DebtMarket, the fintech company he co-founded, which was acquired by Intercontinental Exchange (NYSE: ICE). Prior to Liminal, Steven was EVP and Chief Innovation Officer at FIS, where he led corporate venture initiatives, built new business lines, and oversaw the launch of multiple subsidiary companies.
Earlier in his career at Amazon Web Services, he helped expand the partner ecosystem and drove significant new revenue growth while working on early machine learning and cloud infrastructure initiatives.
Today, Steven focuses on helping regulated industries adopt AI safely at scale, ensuring innovation does not come at the cost of security or control. Steven grew up in the Silicon Valley area, surrounded by his father’s entrepreneurial ventures and a strong appetite for risk. He often credits this environment with shaping his own entrepreneurial mindset from an early age.

Steven Walchek believes the struggle to realize value from enterprise AI stems from a failure to connect automation to how work actually happens. Nearly 80% of companies report using generative AI, yet roughly as many see no meaningful bottom-line impact, and most see zero return despite heavy investment.
Steven points out that most organizations decide what to automate based on assumptions from leadership discussions, employee interviews, or consulting advice. These inputs are often incomplete, biased, and inconsistent across teams, which makes it difficult to even identify the right parts of the business to automate. As a result, companies end up automating the wrong workflows, or automating the right ones inefficiently.
Steven believes most AI programs fail because companies try to plan automation before understanding how work actually happens. They guess at the workflows instead of learning from real day-to-day behavior. As a result, they build tools around an “ideal” version of work that doesn’t match reality. In regulated industries especially, this leads to small wins at best that never scale or deliver real ROI.
Steven urges CIOs to use real behavioral signals to reveal where work is actually breaking down, slowing down, or repeating. Through Behavioral Agent Automation Platforms (BAAPs), he and his team enable organizations to discover automation opportunities directly from how employees work, using those insights to deliver continuous, adaptive automation that compounds into enterprise-wide impact.
Steven believes most enterprises are scaling AI the wrong way. They start with a single model, then add more tools as new needs emerge, creating systems that become expensive, fragmented, and difficult to govern at scale. He points out that the most effective organizations combine three things: multi-model flexibility, strong security, and cost efficiency.
He sees enterprise AI moving beyond single-model use. One model rarely meets the full range of enterprise needs, so employees naturally adopt different tools to get work done. Without control, this leads to fragmented usage, reduced visibility, security risks from shadow AI, compliance gaps, and rising costs.
In regulated industries especially, this fragmentation creates governance and operational challenges. The organizations scaling AI most successfully are moving to centralized systems that combine secure access, multi-model flexibility, and oversight in one controlled environment.
Steven helps mid-market enterprises in regulated industries scale generative AI through a secure, multi-model platform built on governance, visibility, and cost efficiency. The platform integrates application-layer security, administration, observability, and data privacy controls, enabling safe AI use without compliance or data risks.
Instead of being tied to a single provider, teams can access leading models from OpenAI, Anthropic, Google, Perplexity, and xAI (Grok) through one controlled system. He helps reduce AI costs through a model-agnostic approach that is significantly more cost-efficient than using individual providers directly, while maintaining central oversight.
Steven argues that most enterprise AI strategies are built on the wrong premise: that governance is something you layer on after adoption. By the time security teams review tools, employees are already using multiple AI models across workflows, often without visibility into how data is being used or where it is going.
Governance alone can’t keep up with how AI is actually used. Employees will always choose the fastest, most capable tools as new models emerge. Restricting usage or standardizing on a single provider creates friction, slows innovation, and pushes AI activity further out of view.
Steven explains that leading enterprises are shifting from governance-first thinking to an enablement model, where employees get easy access to the best AI tools for any task, while security teams maintain real-time control over data, usage, and policy enforcement.
This requires a unified, model-agnostic layer between users, data, and AI systems. Organizations can proactively enable flexible AI use across any model or application, with built-in data protection, governance, and full visibility into every interaction.
Through Liminal, Steven Walchek and his team help regulated enterprises adopt this model. Their platform enables secure, unlimited access to leading AI models while protecting sensitive data, enforcing policies, and providing a central view of AI use across the business.
Steven sees a clear pattern in how leading enterprises are scaling AI successfully. The organizations pulling ahead are the ones that combine security, multi-model flexibility, and intelligent automation into a single system rather than treating them as separate initiatives.
Most enterprises still struggle to scale generative AI. As more tools and models emerge, employees gravitate toward whatever works best for their task. This improves productivity in the moment, but it also creates fragmented usage that leads to security, governance, and visibility challenges.
Steven explains that leading organizations address this by establishing a secure, multi-model foundation with central governance. Teams can work across models and workflows without friction, while every interaction creates a clearer picture of how work actually happens.
That visibility makes intelligent automation possible. Patterns begin to emerge across repeated tasks, common workflows, and points of friction. From there, automation can be introduced directly into the flow of work, where it continues to improve based on real usage.
Scaling AI comes down to integrating three capabilities from the start: strong data protection, consistent governance, and deep visibility into how AI is used across the organization. Together, they create the conditions for systems that adapt and improve over time.
Steven believes most enterprise AI programs fail to scale not because the technology isn’t ready, but because organizations move too fast. Many companies jump straight into implementing advanced automations and end-to-end AI workflows before they’ve built the basic foundations around data access, user behavior, governance, and visibility.
He explains that the organisations succeeding with AI are taking a “crawl, walk, run” approach. They start by giving employees secure access to general-purpose AI tools in their daily work, creating immediate productivity gains, building AI familiarity across teams, and generating real usage data on where AI is actually valuable.
The crawl phase allows security and IT teams to establish control early: setting policies, monitoring usage, and building visibility into how AI is being used across the business. From there, organizations can safely move into more advanced automation and orchestration, scaling AI on top of a solid foundation rather than starting from scratch.
Steven helps regulated enterprises put this approach into practice, enabling secure, governed access to AI from the start so they can scale quickly without losing control, visibility, or governance.
If there is a specific topic you would like Steven to focus on during the interview that is not listed here, please let us know.
We would be more than happy to run this by Steven to see if he would be able to discuss it in detail and deliver value to your audience.