Livepeer

Posted on May 21, 2024

Introducing The Livepeer AI Subnet

The dawn of generative AI marks a tidal shift in video creation.

The generative video sector has been accelerating rapidly since OpenAI's Sora demos showed what is possible when the barriers to video creation are reduced to entering a text prompt. The top open-source AI video model, Stable Diffusion, grew to more than 10 million users in just two months. But the promising growth of AI video tools faces a serious challenge: the $49 billion GPU market that powers generative AI is controlled by a few global compute monopolies, including NVIDIA, Microsoft Azure and Amazon Web Services (AWS), driving up prices and creating a global AI compute bottleneck.

That’s why we are launching the Livepeer AI Subnet: the world’s first decentralized video processing network with AI compute capability. The Livepeer AI Subnet tackles the structural issues of centralized AI compute by leveraging Livepeer’s open network of thousands of GPUs to offer low-cost, high-performance processing. Building on the architecture of Livepeer’s pioneering decentralized video processing network, the subnet offers globally accessible and affordable open video infrastructure and incentivizes limitless scalability with blockchain-based tokenomics.

So what is the AI Subnet? Let’s get started.

What is the Livepeer AI Subnet?

The AI Subnet is a forked branch of Livepeer’s video infrastructure network that provides a sandbox for the safe development and testing of new decentralized AI media processing marketplaces and tools.

This means that, while the wider Livepeer network will keep its core focus on video transcoding and compute for the $100+ billion streaming market, the Livepeer AI Subnet will serve the growing demand for AI compute capabilities. The Subnet is designed to handle any generative AI video or workflow-improvement task, such as upscaling, subtitling and recognition, and, as development progresses, to let anyone run their own models that cater to specific video and media tasks.

The Subnet enables video developers to add a rapidly growing suite of generative AI features to their applications, such as text-to-image, image-to-image and image-to-video conversions.

Generative media prompts like these trigger what are known as AI inference tasks: the process of using a trained AI model to run, evaluate or analyze new data in order to complete a task. An example of this kind of inference task is inputting a descriptive text command into a model like Midjourney and receiving an image based on that command as the result.

An AI-generated output produced on Tsunameme.ai - the first demo app built on the Livepeer AI Subnet. This job used the text-to-image and image-to-video pipelines. Try to generate your own AI media using Livepeer on the beta at https://tsunameme.ai.

Livepeer’s AI network architecture is designed to organize distinct AI inference tasks into discrete job types. Each of these task types is referred to as a pipeline for sending, receiving and returning job requests. The Livepeer AI Subnet also allows Livepeer Orchestrator node operators to earn revenue in ETH and LPT by deploying their GPU resources for AI processing tasks.

The technical workflow of how tasks get processed on the AI Subnet. Gateway nodes pass tasks to orchestrators, who may be running multiple AI-Runner Docker containers of the same or different pipelines. Those pipelines may already have the requested models warm, or they may dynamically load them as needed.

While pipelines represent specific job types, like text-to-image or image-to-video, there are many different models that can be run within each pipeline to produce different results. Livepeer, as a network, supports specific pipelines, while developers can choose which model they want to run within a given pipeline.
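To make this concrete, here is a minimal sketch of what submitting a job to a Gateway node's text-to-image pipeline might look like. The route, port, and field names (/text-to-image, model_id, prompt) are assumptions for illustration rather than a definitive API reference; see the Subnet documentation for the actual interface.

```python
import requests

# Hypothetical Gateway endpoint; the actual host and port depend on how
# the Gateway node is deployed (assumption, not a documented default).
GATEWAY_URL = "http://localhost:8935"

# The pipeline is the job type (text-to-image); the model run inside it
# is chosen by the developer via a model identifier (assumed field name).
payload = {
    "model_id": "stabilityai/sd-turbo",  # example Diffusers model on Hugging Face
    "prompt": "a lighthouse on a cliff at sunset, cinematic lighting",
}

resp = requests.post(f"{GATEWAY_URL}/text-to-image", json=payload, timeout=120)
resp.raise_for_status()

# The Gateway is assumed to return URLs or encoded data for the generated images.
print(resp.json())
```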

Currently, the focus is on diffusion models developed using Hugging Face's Diffusers library, but future updates will extend support to other model types. Diffusion models are a powerful class of generative models commonly used for generating high-quality images and audio. During this phase of the AI Subnet, Orchestrators are encouraged to keep at least one model per pipeline active (or “warm”) on their GPUs.
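For orientation, the snippet below shows the general Diffusers pattern behind such a pipeline: load a model once and keep it resident on the GPU (the “warm” state Orchestrators are encouraged to maintain), then serve repeated prompts from memory. This is a generic Diffusers sketch rather than the Subnet's internal runner code, and the model name is only an example.

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the model once and keep it resident on the GPU ("warm"), so each
# incoming prompt only pays inference cost, not model-loading cost.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo",  # example model; any Diffusers text-to-image model works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Each request then reuses the warm pipeline.
image = pipe(
    prompt="a lighthouse on a cliff at sunset",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("result.png")
```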

Click here to read more about the subnet’s featured tools and models.

How does the AI Subnet Work?

Livepeer uses a decentralized pay-per-task model. This decentralized market pricing lets developers submit and pay for tasks on demand, rather than pre-reserving expensive compute capacity from centralized cloud providers. Developers can also set the price they’re willing to pay, based on the performance they need from the network and the available supply.

This diagram illustrates how Livepeer allocates tasks to a distributed network of GPUs based on efficiency, instead of directing AI processing requests through a centralized server.

The two most important components in Livepeer’s AI network architecture are:

  • AI Orchestrator Nodes: These nodes handle the execution of AI tasks. They keep AI models “warm” on their GPUs for immediate processing and can dynamically load models as tasks arrive, optimizing both response time and resource utilization.

  • AI Gateway Nodes: These nodes manage the flow of tasks, directing them to the appropriate Orchestrator based on capability and current load, ensuring efficient task allocation and system scalability (a simplified selection sketch follows this list).
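Exactly how a Gateway weighs these factors is not spelled out here, so the following is a purely illustrative sketch, not Livepeer's actual selection algorithm. It assumes hypothetical fields (advertised pipelines, warm models, current load and price) that a Gateway might track for each Orchestrator:

```python
from dataclasses import dataclass

@dataclass
class Orchestrator:
    """Illustrative view of what a Gateway might track per Orchestrator (hypothetical fields)."""
    address: str
    pipelines: set       # pipelines this node advertises, e.g. {"text-to-image"}
    warm_models: set     # model IDs already loaded on its GPUs
    current_load: float  # 0.0 (idle) to 1.0 (saturated)
    price: float         # advertised price per task, in arbitrary units

def select_orchestrator(orchestrators, pipeline, model_id, max_price):
    """Pick a capable node, preferring warm models, low load and low price."""
    candidates = [
        o for o in orchestrators
        if pipeline in o.pipelines and o.price <= max_price
    ]
    if not candidates:
        return None

    # Lower score is better: cold models and busy or expensive nodes are penalized.
    def score(o):
        cold_penalty = 0.0 if model_id in o.warm_models else 1.0
        return cold_penalty + o.current_load + o.price

    return min(candidates, key=score)
```

In practice the real network also has to handle payments, verification and failures; the point of the sketch is only that routing favors nodes that already have the requested model warm and have capacity to spare.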

Application developers can add AI features to their apps by running their own AI Gateway Node and building against its API, or by accessing a remotely hosted Gateway node service if they’d like to avoid hosting their own.

Infinite Scalability

Livepeer AI network infrastructure is designed to scale permissionlessly, enabling easy integration of additional Orchestrator and Gateway nodes as demand increases. It relies on a specialized ai-runner Docker image to execute AI models, which simplifies deployment and makes it easier to scale new pipelines. Ongoing development aims to improve performance and broaden the container’s capabilities to support increasingly complex AI models and custom, user-defined pipelines.
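As a rough illustration of that dynamic-loading idea, an Orchestrator-side process could start an additional runner container on demand via the Docker SDK for Python. The image tag, environment variable names and port mapping below are assumptions for the sketch, not Livepeer's documented configuration:

```python
import docker

client = docker.from_env()

def start_runner(pipeline: str, model_id: str, host_port: int):
    """Start one ai-runner container serving a single pipeline/model pair.

    Hypothetical sketch: the image tag, environment variable names and port
    mapping are placeholders rather than documented settings.
    """
    return client.containers.run(
        "livepeer/ai-runner:latest",  # assumed image name
        detach=True,
        environment={
            "PIPELINE": pipeline,     # assumed variable names
            "MODEL_ID": model_id,
        },
        device_requests=[
            docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
        ],
        ports={"8000/tcp": host_port},  # assumed container port
    )

# Example: bring up a second text-to-image runner when demand grows.
container = start_runner("text-to-image", "stabilityai/sd-turbo", 8001)
print(container.id)
```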

Why Build the Livepeer AI Subnet?

AI video tools lower the barriers to entry so that anyone can now create, with a text command just a few words long, scenes that used to require a set, a dedicated crew and hours of editing. AI can also rapidly improve upscaling, frame interpolation, subtitle generation and many other key video production tasks. There is growing demand for this tech, but only a few global compute monopolies provide scalable infrastructure for it. Furthermore, the proliferation of these tools will only add to the global bottleneck of centralized AI compute.

In addition to the single-point-of-failure risk inherent to highly centralized server networks, a trust and authenticity crisis is brewing around easily generated AI content. Together, these factors represent significant risks to the sustainability of the AI video sector. Livepeer’s AI Subnet forges a pioneering path toward sustainable and profitable open AI video infrastructure by focusing on three core solutions to the problems listed above:

Globally Accessible, Ultra-Low Cost Infrastructure

Livepeer smashes the centralized stranglehold of providers like GCP and AWS, which require you to rent, run and manage one of their GPU servers. The AI Subnet enables Livepeer to take the first steps in AI-enabling the thousands of GPUs already available on its transcoding network.

The cost-saving potential of this innovative way of providing access to AI compute is hard to overstate. Instead of managing and absorbing the cost of a dedicated server, developers can abstract AI video generation into a single task or workflow and submit it to the AI Subnet on demand, powered by Livepeer’s renowned low-cost, high-reliability network that already transcodes millions of minutes of traditional video every week with plenty of GPU capacity to spare.

Because AI startups are benefiting from considerable VC interest and investment, it’s easy for founders and funders alike to look past the sheer costs involved in generative video. But once that initial funding runs out or the markets take a turn for the worse, a highly reliable, low-cost compute service is essential for AI video to be sustainable. Thanks to its impressive heritage in decentralized video compute, Livepeer is uniquely placed to provide this service.

Open and Permissionless AI Media Marketplaces

It’s undeniable that AI content will change video forever. But how, why, and in what way it will change is in danger of taking place behind closed doors.

Powerful, private companies control the most prominent AI models, many of which are closed-source. According to IoT Analytics research, NVIDIA provides a staggering 92% of the GPU compute used in AI data centers, while Microsoft and OpenAI scoop up 69% of the foundational models and platforms market. This centralized structuring of AI compute capability creates the risk of a single point of failure: if a company folds or is shut down by a government, all of its users go down with it.

The subnet embodies Livepeer’s commitment to open-source, censorship-resistant development, and it leverages blockchain and tokenomics to incentivize users to share their hardware, creating an infinitely scalable network of GPUs. Access to this fundamental technology should be open and available on demand to innovative builders, researchers, and startups, regardless of country of origin or the whims of a single corporation.

Content Verification and Authenticity

The dawn of the AI era has ushered in an authenticity crisis. Determining what is real and what is fake is a burden for consumers and a liability for platforms and creators alike. That’s why a sector-wide solution needs to be implemented, fast.

Livepeer has become the first decentralized AI infrastructure project to join the C2PA, an open technical standard that gives publishers, creators, and consumers the ability to trace the origin of different types of media. C2PA specs allow a number of assertions to be recorded and verified, such as the creator’s identity, the creation tool and the time of creation. Members of the C2PA include TikTok, Adobe, Google, Sony, Intel, BBC, Microsoft and OpenAI. Livepeer is proud to participate in the C2PA Technical Working Group and is working to bring open and decentralized principles to global standards-setting for content provenance and authenticity.

Livepeer’s AI Subnet is currently developing measures to tackle fake content through native cryptographic signing that shows a clear trail of provenance.
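Those measures are still in development, so the snippet below is only a generic illustration of the underlying idea (hashing an output together with its provenance metadata and signing the result so anyone can later verify it), using an Ed25519 key from the cryptography package; it is not Livepeer’s or C2PA’s actual scheme.

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A node (or creator) holds a signing key whose public half is published.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

def sign_output(media_bytes: bytes, metadata: dict) -> bytes:
    """Sign a hash of the media plus its provenance metadata (illustrative only)."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = json.dumps({"sha256": digest, **metadata}, sort_keys=True).encode()
    return signing_key.sign(claim)

media = b"...generated video bytes..."
metadata = {"pipeline": "image-to-video", "created": "2024-05-21T00:00:00Z"}
signature = sign_output(media, metadata)

# Anyone holding the public key and metadata can re-derive the claim and verify it.
claim = json.dumps(
    {"sha256": hashlib.sha256(media).hexdigest(), **metadata}, sort_keys=True
).encode()
verify_key.verify(signature, claim)  # raises InvalidSignature if tampered with
```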

The Livepeer AI Roadmap

The launch of the Livepeer AI Subnet marks a significant milestone in Livepeer’s vision to infinitely scale decentralized video compute marketplaces on its open and permissionless network. The roadmap for AI video compute on the Livepeer network is summarized below in three distinct development phases.

Phase 1: AI Subnet Design and Stability (complete)

The first phase, covering a proof of concept for the Subnet and initial onboarding of existing Livepeer Orchestrator node operators, concluded on May 1. Benchmarking of Orchestrator nodes was also completed to ensure that network performance met the demands of demo applications and products. Over 20 high-performance AI Orchestrator nodes are already active. A retrospective for the stability phase is available here.

Phase 2: AI Subnet Optimization (in progress)

With the launch of the AI Subnet, we are now focused on improving the quality of service offered to AI Orchestrator and AI Gateway node operators. The primary goal for this phase is to grow network supply by expanding the range of compatible GPUs (low-VRAM GPUs and server GPUs), reducing container load times, and handling edge cases. Efforts during this phase will also include refining the onboarding experience for application developers by working with a select number of design partners through our new AI Video Startup Program. These partners will provide invaluable user feedback on developer needs and requirements for using AI processing on Livepeer mainnet.

Phase 3: Livepeer Mainnet and AI Network Expansion (Q3 2024)

Following the optimization phase, Livepeer expects to launch AI on mainnet in Q3 2024, enabling a high-quality AI developer experience complete with tools and software development kits. Network expansion will allow for efficient execution of custom models and workflows, secure running of custom container code, flexible inference requests (cold or warm) to reduce costs for developers, and a method to verify orchestrator authenticity and ensure content provenance.

Who can participate in the Livepeer AI Subnet?

Hardware Providers: contribute GPUs and earn fees

The AI Subnet unlocks new revenue streams for infrastructure providers across the Livepeer ecosystem:

  • Existing Livepeer Orchestrators can set up and run an AI Orchestrator node to perform text-to-image, image-to-image and image-to-video inference jobs today, adding a lucrative layer of fees to their existing transcoding earnings.

  • Become a Livepeer Orchestrator - set up your own Orchestrator node on the Livepeer network and start processing video transcoding and AI jobs.

  • Join an Orchestrator Pool and start earning rewards by providing consumer GPUs to the Livepeer network without having to run your own AI Orchestrator Node.

  • Become a Livepeer network partner - contribute compute at scale through a specialized partnership if you have server GPUs or run an existing compute network.

The Livepeer network is permissionless and open to all infrastructure providers. Livepeer documentation makes it easy for hardware providers to get started today on the Livepeer AI Subnet. You can also find an Orchestrator FAQ here. Complete this form to express your interest in getting support for supplying compute to the Livepeer AI Subnet.

Developers: bring models to the network as an AI-Worker

As the AI Subnet evolves, developers will be able to define and deploy custom pipelines and workflows, ensuring their applications remain at the forefront of AI and video technology.

Developers can also set up AI Gateways to test and refine their applications, with access to APIs for AI tasks.

The subnet is permissionless, so developers can experiment with the existing AI pipelines on the subnet right away, although the current phase of the Subnet is not suitable for production-ready applications. Alpha documentation can be viewed here.

For founders dedicated to decentralized AI and looking to build directly on the subnet at scale, Livepeer is launching the AI Video Startup Program. This is an invite-only program for a select group of 5-8 startups innovating in the generative media space. Each startup will receive $40,000 in grant funding, including infrastructure credits and dedicated Livepeer engineering support. To check if you’re eligible, you can apply to join the program here.

Stay Up-to-Date

Today’s Livepeer AI Subnet launch marks an exciting milestone for the project, but it’s just the next step in Livepeer’s mission to provide the world’s open video infrastructure. As generative AI drives an order-of-magnitude increase in the amount of video content created in the coming years, the Livepeer network aims to have the capabilities to be the infrastructure that powers this wave of growth.

Introducing: Livepeer.AI

As part of the AI Subnet launch, we are also releasing Livepeer.ai - the home base for AI on the Livepeer Network.

Join the Community

Engage with the Livepeer community, get support, and provide feedback to help us refine and enhance the AI Subnet by joining the Livepeer Discord. The #ai-video channel is a great entry point for learning about and sharing ideas on Livepeer + AI. Follow our announcements for the latest updates, events, milestones, and opportunities to get involved.