Posted on Jan 19, 2022

WarpSound Genesis

by Chris McGarry, Founder & CEO, Authentic Artists / WarpSound

It’s 2004 and I’m standing on top of a tractor trailer-turned-DJ booth in the middle of a desert-turned-city. The night is cold but the vibe is electric. I’m wearing a faux fur coat that looks like it was taken off the back of a synthetic super yeti. Whatever. It’s warm. It works. Stretched out before me is a sea of beatfreaks and bassheads, moving as one to the throb of a 50-kilowatt sound system. The trailer roof rumbles under me, sending the music through my feet and into every cell of my body.

A full-scale replica of Stonehenge encircles the crowd. This Stonehenge is ancient but new. The stones are white, luminous. Embedded in their core, audio-reactive LEDs pulse with the kick of the drum. Extending from the towering 12-o’clock stone and piercing the dark horizon is a rippling corridor of light. It’s a shimmering onramp to the dance floor. It’s a Sonic Runway. With each kick of the drum, a ray of light rockets down the runway at the speed of sound, greeting travelers with pulses of color and song where they stand.
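
For the technically curious, the Sonic Runway trick reduces to simple physics: light is effectively instantaneous, so making a pulse appear to travel at the speed of sound just means firing each light station after a delay proportional to its distance from the sound source. Here is a minimal Python sketch of that timing; the station count and spacing are invented for illustration, not the installation's actual specs.

```python
SPEED_OF_SOUND = 343.0  # meters per second, in dry air at ~20 °C

def station_delays(num_stations: int, spacing_m: float) -> list[float]:
    """Delay (in seconds) before each light station fires, so the pulse
    appears to sweep down the runway at the speed of sound."""
    return [i * spacing_m / SPEED_OF_SOUND for i in range(num_stations)]

# Hypothetical runway: 30 stations spaced 10 m apart. The far station
# fires ~0.85 s after the kick, right when the sound itself arrives.
for i, delay in enumerate(station_delays(30, 10.0)):
    print(f"station {i:2d}: fire at +{delay:.3f} s")
```

The delay table depends only on geometry and the speed of sound, which is why the effect lands wherever you stand along the corridor: the light and the kick reach you together.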

In this moment I am alive. This is what it means to be free. This is why we build together. This is what it means to be a part of something greater than myself. I thank god, and I thank music. In this moment, they feel the same.

Sonic Runway, Stonehenge, Burning Man 2004.

***

When I sat down to share some thoughts about our NFT project, WVRPS by WarpSound, I wasn’t sure where to take it. As anyone who has been a part of any creative collaboration or close-knit team knows, the lines where one person’s ideas end and another’s begin quickly become so blurry as to be meaningless. Like the Burning Man festival scene I described, WarpSound and WVRPS are the sum total of many individuals’ best thinking, inspiration and energies. Now, every WVRP holder is a part of this collective. WarpSound is and will continue to be a living thing shaped by everyone who touches it.

That said, every story has beginnings.

The story of WarpSound is a love story. More specifically, it is a love story about music. Everyone building WarpSound shares this love. To hear my mother tell it, my love of music started before I saw the light of day. A plastic, orange heart-shaped music box that played Brahms’ Wiegenlied, the most well-known lullaby in the world, sparked it.

Every once in a while I take the heart out and pull its string. Even now, this music-making machine transports me with its familiar tune. The heart and I have a relationship, a real and deeply personal one, that spans my entire life.

In recent years, this humble object has taken on new meaning. There are many ways to describe WarpSound, but on the most basic level we make musical machine hearts. The AI tools that we use may be more advanced than the pin and comb music box technology of the past, but my guess is that we share a common purpose with the makers of my childhood heart: to awaken a deeper connection with music, and in doing so to bring more joy, beauty and play into the world.

If it sounds like I’ve gotten too high on my own music supply, then I won’t disagree. In fact, I’d encourage you to do the same.

***

My first instrument was a sawed-off, 1/16-size violin. My first proper musical education soon followed with Suzuki method group classes. In high school, I joined the San Francisco Symphony Youth Orchestra. Most of the rehearsals from those years are a blur. The one that world-renowned cellist Yo-Yo Ma visited is not.

That day, Yo-Yo was working on a passage with the 1st trumpet player. He told the trumpet player, “We need a different quality from you. Listen.” Digging the horsehair of his bow into the steel-core strings of the cello, he played the surprised trumpet player’s part, note for note. Suddenly, the object producing the sound wasn’t the thing. The cello wasn’t the thing. The human giving it meaning and power was. In that moment, Ma was a cellist second. He was a creator first. The tools and techniques he was using to generate the music were incidental.

Yo-Yo Ma.

Over time my musical tastes and pursuits expanded beyond classical music. My path crossed Prince’s. I recall an afternoon soundchecking at San Francisco’s legendary Fillmore. The stage was packed with a full ensemble: horns, backing singers, multiple guitars, a beefed-up rhythm section. The band was in the pocket and charging full tilt when Prince paused his vocal, told the band to play on, and directed the engineer to tweak the upper-mid frequency range of one of the horns. The engineer micro-adjusted the EQ on the mixing board and Prince nodded his approval, returning to his vocal.

Prince, The Fillmore, 2004.

To this day, I don’t know exactly how (or why) Prince located that imperfection in the wall of sound. I do know that his supernatural awareness of every fiber of the music, and his attention to detail, deepened my understanding of what it takes to master live performance, or anything else.

That was the last time we came face to face. A few days later I was a no-show for a follow-up meeting about the future. I was still “celebrating” the success of the shows. If you’ve ever been your own worst enemy, if you’ve ever burned a dream, then you can relate. And if you can, then there is a part of WarpSound that is already a part of you. May it be small proof that your dreams don’t ever have to die.

Everyone on our team has experiences with music that are woven into the fabric of WarpSound. These are just a few of mine.

***

The seed of what would ultimately become WarpSound was planted in 2016 when I was developing a music strategy for Oculus VR. Expectations for the new medium were high, and I had the chance to walk creators like Eminem, Drake and Skrillex into VR for their maiden voyage. The experience was always the same—some awkward fumbling strapping the person into a gen 1 Rift, then watching jaws drop and imaginations fly as they stepped into what we now call the metaverse.

Skrillex in one of his first VR experiences with Oculus.

Some of the early VR music content was great. A lot of it wasn’t. It was clear that even the best offerings were just scratching the surface of what a new immersive, interactive platform could deliver. Future-forward creators like Lil Wayne, Kygo, Gorillaz, and Major Lazer leaned in and experimented. Others wanted to wait and see.

I didn’t need any convincing. I was down the rabbit hole with a pound of prototype vision tech hanging off my face. An infinite new world was unfolding before our eyes. With this new world would come new stages. Early builders like Adam Arrigo at Wave and Jeff Nicholas at Live Nation were already building them. More on Jeff later…

Early iteration of the Wave VR experience.

With new stages would come new music. What would the artists look like? How would audiences relate to them? What would the music sound like? How would explorers of these new worlds experience it? There were more questions than answers. As we plunged into the unknown, at least one thing seemed certain: the new answers wouldn’t be the same as the old ones. Besides that, it didn’t seem like just replicating what already existed would be that exciting.

Around this time, a team of AI researchers and engineers from Google Brain was doing work that offered a few clues to the answers to some of these questions. A Senior Scientist named Doug Eck launched a project in 2016 called Magenta to “explore the role of machine learning as a tool in the creative process.” Doug and team were developing new algorithms for generating music—tools that could be used to enhance human creativity and fuel new art. Could these new tools be used in service to these new worlds? Could they unlock new music experiences? Could they help ignite new music culture? I talked to the team and became convinced that the answer was YES.
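
To make “new algorithms for generating music” a little more concrete, here is a toy generative sketch in the spirit of that tooling: a random walk over a scale, emitted as a NoteSequence (the symbolic music representation used by note_seq, the open-source library that grew out of the Magenta project) and written out as MIDI. This is an illustrative toy, not one of Magenta’s learned models and not WarpSound’s engine; it assumes the note-seq package is installed.

```python
import random
import note_seq

# C-major scale, C4 through C5, as MIDI pitches.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def generate_melody(num_notes: int, seed: int = 7) -> note_seq.NoteSequence:
    """Toy generator: a random walk over the scale, one note every 0.25 s."""
    rng = random.Random(seed)
    seq = note_seq.NoteSequence()
    seq.tempos.add(qpm=120)
    idx = 0
    for i in range(num_notes):
        # Step up or down the scale by one or two degrees, clamped to range.
        idx = max(0, min(len(SCALE) - 1, idx + rng.choice([-2, -1, 1, 2])))
        seq.notes.add(pitch=SCALE[idx], velocity=80,
                      start_time=i * 0.25, end_time=(i + 1) * 0.25)
    seq.total_time = num_notes * 0.25
    return seq

# Write a 32-note melody to a standard MIDI file.
note_seq.sequence_proto_to_midi_file(generate_melody(32), 'toy_melody.mid')
```

Swap the random walk for a trained model and condition it on listener input, and you start to see the shape of the questions above: music that is generated in the moment, not retrieved.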

I also appreciated that music’s history was filled with examples of new tools unleashing waves of new music and culture—from inventions like the microphone and amplifier to the Technics 1200, Pro Tools, and Auto-Tune. Would creative machine learning have a similar impact and propel music forward? Again, I became convinced that the answer was YES.

It was time to start building.

We started a generative music company. What we were building was clear: real-time, interactive music for virtual worlds. Musical machine hearts. Also, the music had to scale, and it had to do it live. After all, the emerging virtual stage was infinite.

We knew who we were. We were (and are) musicheads and metaverse maximalists.

We knew why we were building: to bring humans closer to music by giving them new ways to explore and experience it. To awaken a deeper connection.

We knew how we would do it: by embracing new tools and technologies like the ones Eck and team were pioneering.

Finally, we decided where we were going to build: in that mysterious place where music and identity converge. We were going to make artists, not just music.

This last point was a subject of some debate. We might achieve our goal of awakening a deeper connection with music without approaching identity. We just thought it would be a lot harder (and less fun). As humans, we seem to have an easier time connecting with the seen than the unseen. And when we connect the most deeply with music, we connect not only with music itself, but also with the source. In our consciousness, the source of the music and the music itself are often inseparable. In reality, the source of the music and the music itself are also deeply intertwined. They exist in harmony. Think of the nightingale and its song or the humpback whale and its music. Think of my heart-shaped music box and its lullaby. Think of The Beatles and A Hard Day’s Night, The Ramones and Ramones, Madonna and Like A Virgin, NWA and Straight Outta Compton, Daft Punk and Homework, BTS and Be.

The jury was in. If we were going to use machines to make music, we were going to need to answer yet another question: who—or what—is the source? We had to locate and express its essence, its identity, just as so many artists (and their managers) have had to find theirs.

Before trying to find something else’s essence, we decided we needed to find ours. We created a set of principles and beliefs to guide us in the pursuit of our vision:

Music is our lifeblood.

We embrace music as humankind’s first and most powerful language. We unleash music’s highest potential by giving every fan a voice.

We ride the bleeding edge.

We resist a lifetime of conditioning about what the artist, performance, and audience experience should be, freeing us to realize music’s future. We reject the safety of the known for the bounty of the unknown.

Our machines have heart.

All of our work serves the highest purpose of making music IP and experiences that evoke, inspire, and thrill. We know we’re succeeding when our content sparks new culture.

Scale is the grail.

We drive automation into every possible step of our production pipeline. We design for many virtual artists, diverse channels, a world of fans. We value simplicity over complexity.

Step one: the music.

To this day, everything we do starts and ends with the music. It’s our lifeblood after all. If we can’t make worthy music, then why bother? We spent the better part of a year building a prototype generative audio engine in our Machine Arts Lab. We packed it with training data, instruments and effects. Incredible creators like Young Guru, Mike Shinoda, DECAP and Stlndrms joined us in the Lab. They infused the engine with their skill and sounds, but not with their compositions. The audio engine needed to stand on its own two feet, composing and producing every note of musical output (and the accompanying music metadata) without relying on human collaborators.

Young Guru in the studio working on the WVRPS mastering chain with WarpSound's audio director Steve Pardo.

It was on. We were making machine art. We gave members of our team a jacket that spelled “art” in the 1s and 0s of binary code.

Throughout 2019, we talked to a lot of people about generative content and why it was exciting. Most didn’t know (or care) about generative content. That was ok. It was a different time. We were going to keep making it. And we were going to make it.

When the dust settled on our prototype audio engine, we liked what we heard. We began to imagine different characters–a dreadlocked humanoid LED screen, a Tetris block-inspired robot, a semi-transparent ghost with audio-reactive guts. They started performing, dropping beats from their machine hearts and minds on Twitch. From this early cosmos, a triad of fully formed generative artists emerged: a lofi-loving cyborg queen named Nayomi, an off-the-chain, half-iguana trap fiend named DJ Dragoon, and a pint-sized AI bunny reprogrammed to shred the gnar named Gnar Heart. They started a collective called WarpSound.

The first iteration of the WarpSound collective. Can you spot DJ Dragoon, Nayomi & Gnar Heart?

In 2020, the WarpSound crew took to the virtual stage for the first time. Yes, they were performing artists. Like Sol System’s Stonehenge, they were recognizable but…different. These artists were interactive. They were co-creative. They didn’t want to play to their fans. They wanted to play with their fans. They were music creators and musical instruments at the same time–part performer, part sequencer, part drum machine, part synth.

Because they were producing new, original music live, the audience could make the music with them, morphing and mutating the performance in real time by voting to do things like “increase intensity” and “slime the beat.” We loved it, Nay, Gnar and Goon loved it, and most importantly, the WarpSound squad’s early fans loved it.
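
Mechanically, this kind of co-creation can be modeled as a simple loop: collect votes during a window, tally them, and apply the winning mutation to the engine’s parameters before the next musical section. A hypothetical Python sketch follows; the vote names come from the show, but the parameter names and mappings are invented for illustration and are not the actual WarpSound implementation.

```python
from collections import Counter

# Hypothetical vote-to-parameter mappings; the real engine's controls differ.
MUTATIONS = {
    "increase intensity": lambda p: {**p, "intensity": min(1.0, p["intensity"] + 0.2)},
    "slime the beat": lambda p: {**p, "swing": 0.6, "fx": "slime"},
}

def apply_winning_vote(params: dict, votes: list[str]) -> dict:
    """Tally one voting window and mutate the music parameters accordingly."""
    if not votes:
        return params
    winner, _ = Counter(votes).most_common(1)[0]
    return MUTATIONS.get(winner, lambda p: p)(params)

# One voting window: "increase intensity" wins 2-1, so intensity
# rises from 0.4 to ~0.6 before the next musical section.
params = {"intensity": 0.4, "swing": 0.5, "fx": None}
params = apply_winning_vote(params, ["increase intensity", "slime the beat", "increase intensity"])
print(params)
```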

Others who shared our passion for music and culture’s bleeding edge started to pay attention. In early 2021, the storied Tribeca Festival booked the WarpSound crew to perform live, onstage, in the streets of New York City on a 42-foot LED screen. The 350-person audience mashed over 10,000 music-changing votes into their phones. Former DMC world champion turntablist A-Trak, Princess Nokia and breakout TikToker Cookiee Kawaii joined for live guest collabs. WarpSound went boom.

Off of Tribeca, it was back to the lab. It was time to keep building. But a question that had been bubbling was now boiling over. The world was getting more “boring” by the day. Would we create on chain? Would we drop an NFT?

The WarpSound squad was down. First, they were literally born digital. They were OG metaverse natives. Second, they were limitless creators. Like supercharged sonic kaleidoscopes, their machine hearts could already generate infinite musical voices. Third, they wanted to show the world what they could do. Last, their DNA was co-creative. They needed more collaborators to make their performances great and help them grow. They needed a community that shared their passion for music, digital culture, the bleeding edge.

As stewards of the WarpSound ship, we were intrigued. More than intrigued. We were itching to find the right on-ramp to web3. But we wouldn’t do something for the sake of doing it. It had to start and end with music. We had to be 100% sure that we set up the WarpSound crew to make magic happen. But how exactly?

A number of formative influences merit a shoutout. Back in 2020 we started talking to Roneil Rumburg at web3 music instigator Audius. We loved what the team was building. Dating back to 2019, we were also collaborating with the alien digital creative intelligence known as Android Jones aka “The Goonfather.” Android had minted his first NFT, Dharma Dragon, in October 2020. A third friend, futurist and crypto creator, Matt Mason, also helped show us the way. In fact, back in January 2021, he was telling us this was the only way. Matt had helped us to launch the first version of WarpSound. We trusted Matt.

But no single person deserves more credit for making WVRPS happen than our friendly neighborhood web3 fire-breather and creative leader, WVRPS orchestrator Jeff Nicholas. I don’t have an ape or punk in my bag. I do have an Adam Bomb, because Jeff passed me one of his pre-reveal. It was my first NFT and I will HODL it to the end.

The Adam Bomb transferred to me by Jeff Nicholas.

Jeff said we should buy an ape and do something with it. We got MAYC #914. We gave #914 arms, legs, and a body. We put a generative musical heart in that body. The community gave #914 a name, GLiTCH, and the WarpSound line-up had its newest member: the first ape musician in the metaverse. The only generative ape musician in the metaverse.

Then Jeff pitched our team on a generative PFP project. This was the magic we were waiting for. Enter WVRPS.

***

If you’ve gotten this far, then you probably already know more about WVRPS than I will share here. You know that each WVRP is a fusion of generative visual art and original music composed and produced note by note by WarpSound’s audio engine. You realize that each WVRP is a unique communion of sight and sound–a singular answer to the question of what the source of a particular sonic texture, a musical vibe, might look like. You appreciate that WVRPS is the product of a creative collective. You understand that WVRPS and WarpSound are a labor of love and a new chapter in each contributor’s music love story.

Last, you believe that the WVRPS drop is just the beginning–the foundation for a greater vision for generative music creativity, social music, and human-machine collaboration. Each WVRP is a unique key that opens the door to a new world of connection with music and each other. Today and tomorrow, we are here in service to music, to art, to culture, to the future, to you. This is our promise. We hope you feel it.

LFG. 🚀

WVRPS by WarpSound.

https://warpsound.ai

https://discord.gg/warpsound

https://opensea.io/collection/wvrps-by-warpsound