
Understanding the Mathematics Behind Modern AI

At the University of Colorado Denver, foundational research is shaping the next generation of intelligent systems.

Headshot of Alejandro Parada-Mayorga, PhD.

In the age of artificial intelligence, breakthroughs often arrive in the form of new algorithms or powerful applications. But behind many of those advances lies a deeper layer of research: the mathematical foundations that explain why modern AI systems work and how they can be improved.

For Alejandro Parada-Mayorga, Ph.D., assistant professor of electrical engineering in the College of Engineering, Design and Computing at the University of Colorado Denver, those foundations are the focus of a growing research program exploring the intersection of signal processing, machine learning, and advanced mathematics. The goal is not just to build smarter systems, but to understand them completely, and in ways that unlock entirely new possibilities.

Where AI Gets Its Power

Every modern AI system, from recommender engines to network analysis tools, relies on the same underlying idea: learning patterns from data. But as those systems grow more complex, so do the questions behind them.

Parada-Mayorga’s latest research paper tackles some of those questions at their root, focusing on a framework known as a reproducing kernel Hilbert space (RKHS), a powerful mathematical structure that underpins much of modern machine learning.

“To understand the gap, it helps to know what an RKHS actually is,” Parada-Mayorga says. “At its core, a reproducing kernel Hilbert space is a special mathematical setting where functions behave very nicely: you can evaluate them at specific points, compare them through inner products, and represent them as combinations of simpler ‘kernel’ building blocks. Think of it as a structured language for describing signals and functions that is rich enough to capture complex patterns, yet disciplined enough to support rigorous mathematical guarantees.”
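His description can be made concrete with a small sketch (my illustration, not from the paper): build a function from Gaussian kernel "building blocks," evaluate it at points, and check the reproducing property, which says that the RKHS inner product of f with a kernel section k(x, ·) simply evaluates f at x.

```python
import numpy as np

def gaussian_kernel(x, y, width=1.0):
    """RBF kernel k(x, y) = exp(-(x - y)^2 / (2 * width^2))."""
    return np.exp(-((x - y) ** 2) / (2 * width ** 2))

# A function in the RKHS is a combination of kernel "building blocks":
# f = sum_i alpha_i * k(c_i, .)
centers = np.array([-1.0, 0.0, 2.0])
alphas = np.array([0.5, -1.0, 0.8])

def f(x):
    """Evaluate f at a point -- pointwise evaluation is always defined."""
    return float(alphas @ gaussian_kernel(centers, x))

# Reproducing property: <f, k(x, .)> = sum_i alpha_i * k(c_i, x) = f(x),
# so taking an inner product with a kernel section is just evaluation.
x = 0.7
inner_product = float(alphas @ gaussian_kernel(centers, x))
assert np.isclose(inner_product, f(x))
```

The coefficients and centers here are arbitrary; the point is only that evaluation, inner products, and kernel expansions all coexist in one structure.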

A central class of examples is graphon-based systems — mathematical models of large-scale networks that allow signal processing and learning to transfer across graphs of different sizes — which is one of the motivating applications for this line of work. This structured framework allows researchers to move beyond trial and error.
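As a rough illustration of the graphon idea (my sketch; the graphon W(x, y) = xy and all names here are assumptions, not from the paper), graphs of very different sizes can be sampled from one graphon, and quantities such as edge density converge to the graphon's mean value as the graphs grow:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(n, graphon):
    """Sample an n-node undirected graph: node i gets a latent position
    u_i ~ U[0, 1]; edge (i, j) appears with probability graphon(u_i, u_j)."""
    u = rng.uniform(size=n)
    probs = graphon(u[:, None], u[None, :])
    upper = rng.uniform(size=(n, n)) < probs
    adj = np.triu(upper, k=1)          # keep the upper triangle only
    return (adj | adj.T).astype(float)  # symmetrize: undirected graph

W = lambda x, y: x * y  # a simple separable graphon

small = sample_graph(50, W)
large = sample_graph(500, W)

# Both graphs are discretizations of the same continuous object, so
# their edge densities approach the graphon's mean value, 1/4 for
# W(x, y) = x * y, as the number of nodes grows.
print(small.mean(), large.mean())
```

Filters defined through the graphon itself apply to any graph sampled from it, which is the transfer property mentioned above.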

By working within an RKHS, researchers gain a disciplined, mathematically grounded way to model and manipulate signals, a capability essential for the sophisticated tasks modern AI systems must handle. “RKHS are important in machine learning and signal processing precisely because they provide a principled foundation for learning and filtering,” says Parada-Mayorga. “They tell you not just what a good solution looks like, but why it works and how to find it efficiently.”

With this foundation in place, the research tackles practical challenges in designing AI systems. “This paper specifically addresses a question that matters for how neural networks are designed and trained on continuous or large-scale network data: how do you represent and learn filters in a principled, mathematically sound way?”

A Breakthrough Built on Collaboration

This advancement didn’t happen in isolation. It emerged from years of sustained collaboration and a shared effort to push beyond the limits of existing theory.

“This paper grew out of a long-standing collaboration with Alejandro Ribeiro at Penn, who was my postdoctoral advisor, and Juan Bazerque at the University of Pittsburgh,” Parada-Mayorga explains. “I have been building this research program for years — each paper opens new questions, and this one was specifically motivated by a gap we noticed in our own prior work.”

That gap led to a deceptively simple question with far-reaching implications.

“We had shown in prior work how RKHS structures can be used to construct convolutional algebras,” Parada-Mayorga says. “And the natural next question was the inverse: when you filter a signal through an integral operator, does that filtering process itself induce an RKHS structure?”

“The answer turned out to be yes, in a deep and beautiful way,” he says, “and tracing all the implications of that took the three of us working together closely…This work is part of a longer arc — we’re developing a unified algebraic framework for signal processing that goes beyond polynomial algebras, and RKHS theory is one natural waystation on that path toward richer mathematical structures.”

Insight to Impact

At the core of the paper is a key realization that connects abstract mathematics directly to how modern AI systems operate.

“There was a key realization that felt like a genuine eureka,” Parada-Mayorga says. “We recognized that the range of an integral operator — the set of all possible outputs you get when you filter signals — naturally carries the structure of a reproducing kernel Hilbert space, and that the reproducing kernel is determined by a specific operation on the filter symbol called the box product. That’s a clean fact well known in functional analysis, but one that has been overlooked in multiple signal models that rely on integral operators. The box product is essentially a structured way of combining filter symbols that encodes how filtering in one domain induces structure in another; it turns out to be the right operation for tracking how integral operators pass RKHS structure through.”
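The statement can be mimicked in a finite-dimensional sketch (my analogy, not the paper's construction; the box product operates on filter symbols of integral operators, which I am crudely replacing with matrix products). Discretize an integral operator T f(x) = ∫ k(x, y) f(y) dy as a matrix A; its range, the set of all filter outputs, then carries a positive semidefinite kernel K = A Aᵀ whose sections span exactly that range.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))   # discretized filter kernel k(x, y)
K = A @ A.T                       # candidate reproducing kernel on the range

# K is symmetric positive semidefinite, as any reproducing kernel must be.
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-10

# Every filter output A @ f lies in the span of the kernel sections
# (columns of K), since col(K) = col(A A^T) = col(A).
f = rng.standard_normal(3)
y = A @ f
coeffs, *_ = np.linalg.lstsq(K, y, rcond=None)
assert np.allclose(K @ coeffs, y)  # y is exactly reproduced by K
```

In the continuous setting of the paper, composing the filter symbol with itself plays the role that A Aᵀ plays here.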

But insight alone is not enough. Translating that moment into a usable framework required extensive theoretical development.

“Turning that observation into a full theory — characterizing the algebraic structure, proving equivalence between classical and RKHS-based filtering, establishing spatial-spectral tradeoffs, and showing how it all applies to learning — that was very much a slow grind of incremental work, checking every step carefully.”

The payoff is both conceptual and practical.

“Our result showing that optimal learned filters can be expressed as finite expansions in kernel functions isn’t just theoretically elegant, it provides a concrete, computationally efficient approach to building better learning systems,” Parada-Mayorga says. “The path from abstract theory to practical tools is shorter than most people expect.”
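That claim echoes the classical representer theorem, which can be sketched with kernel ridge regression (a standard textbook setting, not the paper's construction; all names and parameter values here are illustrative). The optimal fit over an infinite-dimensional RKHS collapses to a finite expansion in kernel functions centered at the training points, so only n coefficients need to be computed.

```python
import numpy as np

def rbf(x, y, width=0.1):
    """Gaussian kernel matrix between point sets x and y."""
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(2)
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.standard_normal(20)

# Representer theorem: the RKHS-optimal fit is f_hat = sum_i alpha_i k(x_i, .)
# with closed-form coefficients alpha = (K + lam * n * I)^{-1} y.
lam = 1e-3
K = rbf(x_train, x_train)
alpha = np.linalg.solve(K + lam * len(x_train) * np.eye(len(x_train)), y_train)

def f_hat(x):
    """Learned function: a finite kernel expansion with 20 coefficients."""
    return rbf(np.atleast_1d(x), x_train) @ alpha

print(float(f_hat(0.25)))  # approximately sin(2*pi*0.25) = 1
```

The infinite-dimensional optimization has been reduced to one 20-by-20 linear solve, which is the kind of computational payoff the quote describes.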

The Rhythm of Theoretical Research

Behind the breakthrough is a process defined by rigor, iteration, and deep thinking.

“A lot of it involves filling whiteboards and notebooks and staring at equations,” he says. “Only once I feel I have a solid grasp of the fundamentals do I begin considering numerical experiments.”

He credits his mentor and collaborator, Alejandro Ribeiro, with shaping that approach.

“My postdoctoral advisor and mentor, Alejandro Ribeiro, has a phrase I love: ‘if you understand the theory well, the numerical experiments feel tautological,’” says Parada-Mayorga. “What he meant is that theory should give you such a clear picture of the structure that experiments confirm rather than discover — they’re not exploring the unknown, they’re verifying the inevitable.”

That mindset underscores a central principle of the work.

“This paper includes experiments where we actually learn filters using the RKHS framework we developed,” he says. “Theory that doesn’t connect to computation is incomplete. But the computation earns its meaning from the theory that precedes it.”

A Place for Foundational Research

This kind of work requires an environment that values both depth and long-term impact—and at CU Denver, that environment is intentional.

“The most important ingredients for this kind of work are intellectual freedom and good collaborators, and I’ve been fortunate to have both here,” Parada-Mayorga says. “I feel particularly grateful to our Chair, Mark Golkowski, for his genuine support of this research direction.”

“Fundamental mathematical work can be a harder sell in an engineering department, where students and industry partners are naturally drawn to applied, immediately tangible results,” he explains. “But Mark has understood from the start why this kind of foundational work matters.”

That support aligns with Parada-Mayorga’s broader vision.

“My long-term goal is to build at CU Denver a recognized center for Abstract and Mathematical Signal Processing — a research identity that bridges rigorous algebraic foundations with the practical demands of modern machine learning and signal processing,” Parada-Mayorga says. “What makes that ambition realistic is that this kind of research doesn’t require massive infrastructure or expensive equipment — it requires sustained focus, intellectual rigor, and a community that takes foundational questions seriously.”

A Place for Students at the Frontier

For students considering CU Denver, Parada-Mayorga offers a clear invitation.

“If you’re drawn to deep mathematical questions at the intersection of signal processing and machine learning, I’d say: come see what it feels like to work at the frontier of a field.”

Students in the Abstract Signal Processing Lab (Parada-Mayorga’s research group) work at the intersection of signal processing, mathematics, and machine learning, a combination that’s rare in engineering programs and increasingly in demand as the field matures.

He shares his own journey to illustrate why the university’s approach matters.

“When I was a young engineering student, I kept running into questions I couldn’t answer with the material from my ECE courses alone. Not because the courses were lacking, but because I wanted to understand why the tools worked — not just how to use them,” says Parada-Mayorga. “So I started taking extra courses in the mathematics department. Not with any intention of becoming a mathematician, but with the goal of building a complete and honest understanding of the foundations underneath the engineering. What I discovered was that having that solid mathematical background completely changed the way I could see problems. It gave me a different vantage point — one that let me make connections I wouldn’t have seen otherwise.”

For Parada-Mayorga, this perspective is central to the excitement of his research.

“That’s one of the things I find most exciting about this line of research: it gives you the space to question conventional thinking in classical engineering,” he adds. “Sometimes you find that a concept from functional analysis or abstract algebra unlocks a problem that classical methods couldn’t touch — and that feeling is extraordinary. If any of that resonates with you, we should have a conversation.”

From Mathematical Insight to Real-World AI Impact

Looking ahead, Parada-Mayorga points to the accelerating convergence of signal processing, machine learning, and both pure and applied mathematics.

“The convergence happening right now between signal processing, machine learning, and pure/applied mathematics is accelerating,” Parada-Mayorga says. “I think we’ll see AI systems that can operate rigorously on increasingly complex, heterogeneous data — not just images and text, but physical networks, sensor arrays, biological systems. That’s a challenge that foundational research like this is positioned to address, by providing the theoretical scaffolding that practitioners will need as the complexity of data grows.”

In other words, future AI systems will need to handle more diverse, interconnected forms of data than ever before while maintaining mathematical rigor, a challenge his work directly addresses.

“This paper is part of that broader movement,” adds Parada-Mayorga. “We’re extending classical signal processing concepts like bandlimitation and spatial-spectral localization into settings defined by integral operators and graphons, which are the right mathematical objects for modeling large-scale networks.”

By uncovering how integral-operator filtering induces RKHS structures, his research transforms abstract mathematical theory into tangible tools for designing and learning filters that scale efficiently and predictably.

“The field is building toward AI that is not just empirically powerful but theoretically understood.”

And that shift starts with research like this—work that asks deeper questions, builds stronger foundations, and creates new possibilities for what engineering can achieve.

“This paper establishes something abstract — that integral-operator filtering induces RKHS structure through the box product — and that abstract fact immediately yields concrete tools,” Parada-Mayorga says. “The more carefully you understand the structure of a problem, the more effectively you can solve it.”

Ready to work at the frontier of AI and mathematics? Discover what’s possible at the University of Colorado Denver.


CU Denver Engineering, Design and Computing

At the CU Denver College of Engineering, Design and Computing, we focus on providing our students with a comprehensive engineering education at the undergraduate, graduate and professional level. Faculty conduct research that spans our five disciplines of civil, electrical and mechanical engineering, bioengineering, and computer science and engineering. The college collaborates with industry from around the state; our laboratories and research opportunities give students the hands-on experience they need to excel in the professional world.
