Podcast: Rethinking Music Creation: A Deep Dive with Roland’s Paul McCabe

For this episode of the RealMusic.ai Podcast, David O’Hara sits down with Paul McCabe, musician, composer, and Senior Vice President of Research and Innovation at Roland. Paul leads the Roland Future Design Lab, a global R&D group focused on exploring new possibilities in music creation, instrument design, and the role of AI in hardware. His nearly forty years in the industry give him a rare perspective on where music technology has been and where it’s heading.

Paul begins by sharing how his early years working in music stores during the rise of MIDI shaped the way he understands musicians and the tools they rely on. Instead of thinking in terms of features or specifications, he learned to translate complex technology into something creative people could actually use. That insight still drives how he approaches innovation today.

A major part of this conversation explores how AI is beginning to move into musical hardware. While most discussions focus on AI-powered software, Paul explains how embedded AI models could eventually live inside instruments themselves. These systems could learn a player’s style, adapt to their skill level, guide them through practice, or even act as a creative partner. None of this replaces human musicianship. For Paul, AI is at its best when it enhances creativity rather than trying to imitate or replace it.

He also reframes the real barriers to music making. It’s not just talent, time, or money. Today, the biggest obstacle may be focus. With so many distractions, it’s difficult for people to stay engaged long enough to build new skills. Paul discusses how lessons from gaming and interactive experiences might help instruments keep players motivated in new ways. AI could support learning and creativity by making the process more personal, responsive, and rewarding.

The episode also looks at the ethics and responsibility side of AI. Paul explains the story behind AIforMusic.info, the initiative Roland launched with Universal Music Group to create shared principles for the responsible use of AI in music. These guidelines emphasize protecting human creativity, respecting artist rights, and ensuring transparency. Over 150 organizations have already joined the effort, making it one of the broadest industry initiatives to date on responsible AI in music.

Paul also shares insight into Project Lydia, Roland’s neural sampling proof of concept built on a Raspberry Pi using technology from Neutone. Instead of developing this behind closed doors, Roland is putting it directly into the hands of musicians for real-world feedback. It’s a look at how experimentation, community input, and AI research can come together to shape the next generation of instruments.

This conversation isn’t about hype or fear. It’s about the real opportunities ahead for musicians, technologists, and creators. Paul offers a thoughtful, grounded view of how technology can support human creativity and expand what’s possible without losing what makes music deeply personal. For anyone curious about the future of instruments, learning, and music creation, this episode opens a meaningful window into what may be coming next.

What you’ll learn in this episode:
• How AI is beginning to shape musical hardware, not just software
• Why the real barriers to music making are changing, especially around focus and engagement
• How instruments might adapt to players using embedded AI models
• Why Roland partnered with Universal Music Group to launch AIforMusic.info
• How Project Lydia demonstrates the future of neural sampling and user-driven R&D
• Why Paul believes AI should support, not replace, human creativity

Watch or listen to the full episode.

Video Episode: YouTube and Spotify

Audio Episode: Apple Podcasts, Podbean, Amazon Music/Audible, iHeartRadio, Player FM, Spotify, Listen Notes, Podchaser, Boomplay
