How does the AIGC Chain work? What is our solution?

In the next 10 years, AI technology will become an important part of people's digital lives, affecting all aspects of life in the long run. AIGC (AI-Generated Content) Chain was founded on a simple mission: to make the AIGC ecosystem fair, accessible, and easy to use for all people around the world, regardless of race, gender, education, nationality, location, language, culture, or religion. AIGC Chain is a distributed AIGC model infrastructure that anyone can use to train their own small AIGC models and add them to the AIGC Chain infrastructure at their discretion. AIGC Chain is a public, decentralized layer-2 solution. It was built with Go and initially works with Ethereum and iPollo, with additional plans to expand to other networks such as BSC and Cosmos, and to computation networks such as PAI. It aims to create an open, distributed, and modular AIGC network and to support storage public chains such as Filecoin. AIGC Chain is crowd-trained and maintained by the AIGC ecosystem. AIGC Chain will support a peer-to-peer marketplace where users can sell distributed GPU services, data storage, or expertise to the AIGC communities without the need for a central authority or middleman. Participants can use their web3 identities, such as a blockchain wallet address or a decentralized identity, to enter the marketplace and offer their services to the AIGC ecosystem. The marketplace allows participants to access and purchase various AI-related services and resources, such as distributed GPU services, data storage, and data labelling, from a variety of sellers. This helps promote fairness, efficiency, and performance in AI and supports the growth of the AIGC ecosystem.

What problem are we solving?

1. Unfairness: AI should be fair, inclusive, accessible, and easy to use for all people, including marginalized and underrepresented groups, with the aim of promoting social good and benefiting all members of society.
A centralized service platform may fail to meet the diverse needs and preferences of its users because it depends on the resources, decisions, and policies of a central authority; centralized AI can also exacerbate social inequality. By contrast, a decentralized platform such as AIGC Chain, crowd-trained by the community, aims to cover a wider range of topics and needs. Because it is not restricted by a central authority and is built on distributed networks and protocols, users have more control and ownership over their interests, data, and assets, enabling them to create and produce a wider range of content and services.

2. Inconsistent artistic style: AI-generated content solves problems by giving users and organizations new capabilities and insights that help them succeed. But the general diffusion models used for AIGC are designed to generate random and diverse outputs. A diffusion model uses a stochastic process, meaning it relies on randomness and probability to generate its outputs, so the output can vary from one run to the next even with the same prompt. In addition, a diffusion model may be trained on a diverse and varied dataset, which further contributes to the randomness and inconsistency of the generated images. In real-world applications such as content production, however, consistency is usually desired. AIGC Chain's tech contributor has developed accuracy control for coherent visual generation, resolving the inconsistency issues of diffusion-generated images.

3. Poverty: The COVID-19 pandemic has caused significant economic disruption and has pushed millions of people into poverty. The lack of access for minority, marginalized, and underrepresented groups to the AI ecosystem will exacerbate wealth inequality further.
The use of AI generative models and a peer-to-peer marketplace can help elevate the unique cultures, stories, and skills of marginalized communities, which are valuable to humanity as a whole. A decentralized AIGC infrastructure would empower marginalized communities by allowing greater participation, thereby reducing the negative impacts of the pandemic and mitigating wealth inequality.

4. High entropy and disorder of information: The second law of thermodynamics states that the entropy of the universe always increases; to slow the process of cosmic heat death, negative entropy is needed. In physics, negative entropy refers to a system that is more ordered than a state of equilibrium, rather than more disordered. In economic and social terms, this means that information should be ordered, not disordered, and that order is necessary for sustainable economic and social development. Web2 platforms exchange free access for users' advertising time, but the ads are often ineffective and increase entropy, which is detrimental to the overall economic system. Blockchain technology reduces entropy by providing secure, transparent, tamper-resistant storage and transmission of information, increasing reliability and trust in the information stored on the network. NFTs are verified on blockchain networks, providing a secure way to authenticate ownership of artworks and reducing entropy by creating a clear record of ownership. Web3's decentralized architecture also reduces entropy by allowing users to choose what kind of advertising they receive. Overall, a decentralized AIGC service platform has the potential to give its users more fairness, freedom, innovation, and choice than a centralized service platform.

What is the final product? What is it used for?

AIGC Chain is the backbone infrastructure that powers users' digital lives.
It is a decentralized web service where participants can access the AIGC services and ecosystem by creating or logging in with their web2 or web3 identities. Users can use existing web2 login credentials, such as a username and password, or create and use web3 identities, such as a blockchain wallet. Decentralized web services give web3 users more control over their online identities and assets, along with more security and privacy. AIGC Chain also enables users to access and interact with decentralized AI applications (dApps) built on the web3 technology stack. AIGC Chain makes users' work more effective and efficient by automating tasks and content creation, reducing the need for resource-intensive labor and wasted data. One AIGC Chain service, NFT creation, is based on conceptual design input: a model trained on specific concept art can generate NFT images that maintain a consistent artistic style while offering creative variations in accessories, color, composition, background, and texture. The result is unique and interesting NFT images in a consistent artistic style, produced by automating repetitive and time-consuming tasks. Another example is using AIGC Chain to create fashion designs based on data such as current inventory, customer preferences, and market conditions. By making the design process more efficient, this can reduce the need for time-consuming market research and data entry. AIGC Chain can also be used to create personalized promotions for each customer based on their preferences, making designs more effective and relevant and reducing the data wasted on targeting customers who are not interested. All users can improve their digital lives by saving time and money with AIGC Chain.
Users can participate in the AIGC Chain ecosystem by creating and sharing content, training IP models, and providing computational resources such as GPU servers and data storage to the network. This gives users a more active role on the web while benefiting from the collective knowledge and resources of the community. AIGC Chain creates a more open and accessible web where users have more control over their data and how it is used, and where they can choose to participate in the decentralized economy.

Roadmap

1. In 2023, utilities are expected to be the main focus of the web3 world. Only when there are utilities that cannot be fulfilled in the web2 world will the unique privacy protection and data ownership offered by web3 become a true necessity for people. AIGC Chain offers a true utility: making AIGC fairer and available to all people regardless of gender, race, culture, religion, location, education, and so on. AIGC Chain hopes to solve the problem of information disorder for the entire world. The development of the crypto industry has proven that transparently collecting gas fees on asset transactions is a viable business model. We believe that users will pay for products that generate negative or lower entropy and higher value. Training AIGC models organizes information and lowers its entropy, which is good for the long-term growth of the whole economy. Users pay community fees to use others' models, which reduces the entropy of their own information. The function of the community fee is to make the AIGC ecosystem more orderly and reduce entropy; the community works to maintain order and stability within the network rather than letting it become disordered and chaotic.

2. In the first phase, AIGC Chain focuses on AIGC utility and ecosystem developments.
During the testnet, model training, the content created by the models, and user behaviors will be recorded in users' own wallets, paving the way for the future ecology of the mainnet. To ensure security and a sound consensus mechanism, a fork of the Ethereum main network was carried out to upgrade performance and increase block generation speed.

3. The use of an EVM-based virtual machine greatly simplifies the process of creating applications and porting ERC contracts for teams familiar with these types of contracts.

4. The governance token will run on the AIGC Chain. The initial plan is for node members to be composed of both AIGC contributors and users, to ensure decentralized rights and the security of the network.

5. As more users, models, and model consumers join AIGC Chain, more people will be willing to use AI on AIGC Chain, which will attract more innovative developers.

AIGC Chain Roadmap:

2022
Q1: Prototype AIGC
Q2: Alpha testing of AIGC
Q3/Q4: Beta testing of AIGC

2023
Q1: Launch AIGC Chain testnet; onboard Metanaut NFT, one of the first series of NFT projects created by the AIGC Chain ecosystem. Start AIGC Chain's ecosystem development. Community testing for "Text to Video" and other skill engines from the avatar ecosystem, DID/NFT, and the keyword ranking algorithm.
Q2: Community testing of distributed AI content generation; community testing of distributed model training for AI models; community testing of the AI picture book and "Personality Infused Avatars"; launch the utility token for the ecosystem
Q3: Launch AIGC Chain mainnet
Q4: Community alpha testing of text-to-3D-object generation; launch an open-source code contribution platform for hosting and collaborating on AIGC algorithm development; reach a milestone of 20mm ecosystem users

2024
Launch the marketplace for GPU server and data storage contributions; reach milestones of 10,000+ dApps on the AIGC Chain ecosystem and 50mm ecosystem users

2025
Reach a milestone of 100mm ecosystem users

2030
Become the leading distributed AI metaverse ecosystem with the strongest scalability, the widest coverage, and the largest ecosystem

Who are AIGC Chain's Contributors?

Contributor of AI and web3 Convergence

Cyrus Hodes leads the mission of AI and web3 convergence. Cyrus previously was a co-founder of Stability AI, one of the prominent open-source AIGC platforms. Cyrus is a member of the OECD's Expert Group on AI Compute & Climate and an expert at the Global Partnership on AI (GPAI), serving on the AI for Climate Action and AI for Agriculture Committees. He is the co-chair of Sustainability Commons with IEEE's Planet Positive 2030. Cyrus was the first Advisor to the AI Minister at the UAE Prime Minister's Office. He holds an MPA from Harvard University, an MA in Industrial Dynamics from Paris II University, and a BA from Sciences Po Paris.

Base Code Contributor

AIGC Technology Contributor: Oben is a tech company that makes and promotes AI tools and services to help people live digitally. It was founded in 2014 in Los Angeles by a group of researchers and entrepreneurs. The company's mission is to advance and promote the development of AI in a safe and responsible manner, with an emphasis on achieving positive outcomes for humanity.
Oben does research and development in machine learning, speech morphing, and natural language processing. It has also built a number of AI-powered products and services, including AIGC Chain, a set of foundational AI-Generated Content (AIGC) tools. With these tools, users can make 3D avatars that look like real people and create content, encouraging innovation and improving logistics by making it easier to bring ideas to life efficiently. Adam Zheng is a co-founder of Oben and was previously a venture partner at Lightspeed Venture Partners. Adam and his classmates at UC Berkeley and Tsinghua co-founded Baihe.com, one of the largest Internet dating sites in China. Adam received a PhD in Transportation from UC Davis and an MFE from UC Berkeley.

DETAIL SECTIONS

A diffusion model is now required to achieve state-of-the-art content generation. Diffusion models excel beyond the previous Generative Adversarial Networks (GANs) primarily in two aspects: (1) they need no adversarial component in training, which could make training results indeterministic; and (2) they can have a much larger parameter scale than GANs. The vast increase in parameter scale drives the improved quality and higher resolution of the generated results. Training a diffusion model typically involves using a large dataset of images and their corresponding text descriptions as input. This dataset typically consists of several hundred million to several billion images with a resolution higher than 512x512, along with text embeddings that provide additional information about the images. Diffusion models can incorporate external information as knowledge into the generation process, resulting in more realistic and diverse content. This ability is especially useful in applications where the generated content needs to follow certain rules or constraints, such as generating natural language or images with specific attributes.
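For readers unfamiliar with how diffusion training works, the forward (noising) process that such models learn to invert can be sketched in a few lines. This is a generic DDPM-style sketch under standard assumptions, not AIGC Chain's proprietary raw-feature variant:

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t from the forward (noising) process q(x_t | x_0).

    Standard DDPM closed form: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,
    where abar_t is the cumulative product of (1 - beta) up to step t.
    """
    abar = np.cumprod(1.0 - betas)       # cumulative signal-retention schedule
    eps = rng.standard_normal(x0.shape)  # fresh Gaussian noise
    xt = np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps
    return xt, eps

# A linear beta schedule over 1000 steps, as in the original DDPM setup.
betas = np.linspace(1e-4, 0.02, 1000)
rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))  # stand-in for a tiny training "image"
xt, eps = forward_diffusion(x0, 999, betas, rng)
```

At the last step the signal coefficient sqrt(abar_t) is nearly zero, so x_t is almost pure noise; the model is trained to reverse this corruption step by step, conditioned on the text embedding.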
An NLP model such as CLIP can be used as a conditioning module to extract contextually meaningful information from text or images. The extracted features can be used to direct the generation process, ensuring that the generated content contains the desired attributes and characteristics derived from the inputs. Diffusion models are hard to train because it takes substantial GPU resources to get the desired results. This can be a significant cost for companies and organizations developing and using these models, and it is especially difficult for startups with fewer resources. For example, training a model requires many powerful GPU instances (such as several racks of NVIDIA A100s) working together; this is not only an expensive constraint, but it can also take several weeks. Using a diffusion model in production also costs money every time it is used, on top of the initial training cost. The model inference process, which generates content from the trained model, needs computational resources for high-resolution or complex content. Inference can be done with a single GPU, such as an NVIDIA A100, V100, or 3090, and is much cheaper than training; however, the costs can still add up given the high demand for such applications. To reduce the computation cost of diffusion models, our team has developed proprietary algorithms and techniques to optimize the model training and inference processes. Parallelization of the model training process is one of these techniques. Techniques such as pruning and quantization can also be used to reduce the number of parameters and the amount of computing power needed.

Illustration of AIGC Chain Technology Framework

Unique architecture modules include: 1.
The forward diffusion process: Our raw-feature-based autoencoder uses a raw-feature initialization mechanism, which has proven more effective than red-green-blue (RGB-based) diffusion methods. This raw-feature initialization allows the model to capture more detailed and nuanced information about the input data, resulting in higher-quality generation results.

2. The reverse diffusion process: We use a low-rank approximation denoising method to achieve dimension reduction in the latent space for the semantic features of large matrices. This low-rank approximation significantly reduces computation cost compared to other methods such as Stable Diffusion, making the model more efficient and cost-effective.

3. Accuracy control for coherent visual generation: For better controllability and consistency of the generated content (especially human characters), we provide an integrated solution that combines our work on diffusion models with experience from past video, 3D avatar, multiview stereo, and segmentation projects. Besides modifying the diffusion methods and the autoencoder and inserting new CLIP modules, a variety of proprietary algorithms provide extra conditioning beyond plain images and text. This yields results that are not only visually appealing and diverse, but also coherent and consistent with real-world perception.

Technology comparison with mainstream diffusion models

As the results of our experiments show, our diffusion model has the following features:

1. Consistently high-quality images. This includes factors such as resolution, realism, and visual fidelity. High-quality images are essential for ensuring that the diffusion model can generate realistic and visually appealing content. We have invited many industry clients (e.g., Art Center College of Design) to test and evaluate the quality of our image generation. Our quality matches market leaders such as DALL·E 2, Midjourney, and Stable Diffusion.
Depending on the use case and the comparison criteria, it can outperform other models. Please see the images generated using the exact same prompts on the left for a comparison (left: AIGC Chain; right: Stability AI).

2. High relevance of the generated images to the prompt. This involves evaluating how well the model interprets and responds to the input prompt, and how well the generated images align with the intended subject matter or theme. The ability to generate relevant and faithful images from the input prompt depends largely on the design of the CLIP model, which converts the input text or image into an embedding. The CLIP model is trained on a large corpus of data and learns to map words and images to a common latent space, allowing it to effectively capture the semantic relationships between words and images. Our model is equipped with native English and Chinese CLIPs, allowing it to generate high-quality content that accurately reflects the input prompt in both languages.

3. Faster model training. To train and fine-tune the same image set, a comparable market model needs 1 hour with 4 V100 GPUs, while AIGC Chain needs only 0.6 hours with 1 V100 GPU.

4. Faster content generation. Our diffusion model generates significantly faster than existing methods, producing high-resolution images in a shorter amount of time and making it a more practical and efficient tool for image generation. On a V100 GPU, for the same resolution, our generation time is between 10% and 50% of comparable market models.

5. Consistent artistic style while allowing for creative variation. Artwork produced within a style has a cohesive and recognizable aesthetic, while leaving room for individual creativity and variation.
For example, a consistent artistic style could include specific color palettes, composition techniques, or subject matter, while allowing individual expression and variation within these constraints. This technology is essential for creating non-fungible tokens (NFTs), as it provides a cohesive and recognizable body of work that can be easily identified and valued by collectors, allowing artistic freedom and creativity while ensuring the artwork remains desirable to collectors. For example, in the NFT project below, AIGC Chain was used to train a small model for the project. Once training was done, the small model became the virtual artist, able to produce a set of NFT images for the project in a recognizable, consistent style. With AIGC Chain models, users can quickly and easily make a series of high-quality images that share the same artistic style. This allows for the creation of an engaging visual narrative with well-defined characters and settings for a picture book. In the demo below, AIGC Chain generated a series of images automatically, each matching the text of the story.

6. Flexibility, allowing it to be adapted for different vertical industries and IPs. For example, our framework can be customized for the fashion industry, allowing merchandisers and designers to quickly and efficiently generate a wide range of high-quality images of clothing and accessories for use in manufacturing prototypes. Our model can also capture market intelligence and learn from past data and trends, generating designs tailored to the specific market and audience of the fashion brand so that they are relevant and appealing to customers.

Explanation of the use case for the entire infrastructure. How does the ecosystem function?
An NFT project usually costs between $100k and $150k, requires 4-5 employees, and takes one month to develop a 10,000-image collection. Using AIGC Chain, our NFT client can simply upload a few images to give our AIGC models ideas for concept designs. Our models then generate a few candidate images based on the inspiration from the samples for the client to review and select. After the client chooses their favorite image, it is used as the original concept image to train a dedicated model for the NFT project. It takes AIGC Chain between half an hour and a few hours to train this model, depending on the server's bandwidth. After training, the NFT generation model is ready to produce NFTs for the client. Each image has unique artistic variations or traits while maintaining a consistent style, so users can identify the generated NFTs as part of the same collection. The entire workflow takes one person between 4 and 6 days, at around 10-15% of the original cost. Diffusion models are not only cost-effective but also creative. The trained model for an NFT project becomes the project's brain, powering the future creation of digital content for the community. Using our model, the project team and the community can make NFTs, picture books, animations, and other types of content, which helps tell stories and visualize the NFT metaverse. Users can register their own NFTs on AIGC Chain by connecting the NFTs in their wallets and training them. This training incurs a fee, which is split between the server and storage providers (70%) and a community fee (30%). Once training is complete, the NFT models are recognized as intellectual property on the platform and can be used to generate content, and potentially revenue, for the owner. In the demo above, the politicians are public figures whose small models are usually pre-trained and registered on AIGC Chain by their fans.
Because the models have already been registered, any user can use them to create a variety of content about those politicians.

Incentive models. Why are users deciding to build here?

Community Fee

The community provides all the resources needed to train AIGC models and generate text or images, such as algorithms, models, GPU power, data labelling, and data storage. Users of the chain select the resources they need and pay the community fees directly to the contributors through smart contracts. 70% of the community fees go directly into the contributor's wallet, and 30% go into the AIGC Chain treasury fund for ecosystem development.

Transaction Gas Fee

The gas fee is a fee paid to the miners of the AIGC Chain. When a user makes a transfer on the blockchain, a miner (node) needs to package and record the transaction in an AIGC Chain block for it to be completed. This process consumes the node's computational resources and therefore incurs a miner fee. In EVM-compatible chains, the fee is determined by the gas price (unit price) and the consumed gas limit (quantity). The calculation formula is:

Gas fee = Gas Limit * Gas Price

The gas limit is primarily affected by the complexity of the operations in the smart contract: the more operations, the higher the gas limit. The gas price is set by the initiator, and the higher the price, the faster the transaction will be packaged by the nodes/miners. For example, a transaction consuming 21,000 gas at a price of 10 gwei costs 210,000 gwei. The gas fee is the transaction fee paid to the validators of AIGC Chain and cannot be refunded.

1. Domain Knowledge Contribution: Even though the diffusion process is one of the most advanced and useful AI models, one of the biggest problems is that centralized platforms cannot cover all areas of human knowledge. This makes it difficult to create and train models on specific subjects, which can limit the diversity and creativity of the generated content.
To solve this problem, we are looking for help with building blocks for creating and training models on any specific topics the community is interested in. By working together, AIGC Chain can cover far more of human knowledge, letting it produce accurate and useful content in many different fields. In the images below, AIGC Chain's base model already knows a little about cats in general, but it does not know about Persian cats or Garfield cats. A user can register "Garfield Cat" as a keyword on the network and train a small model for it by uploading relevant images. AIGC is the latest technology aimed at disrupting the traditional means of producing images, text, videos, 3D objects, and so on. When users use AIGC, they give commands by entering keywords or subject names linked to the related trained models. These keywords or subject names are used to find the right models, which are then run on a GPU server to generate the content, and the generated content is delivered to the user. As in any other business, users who make content pay the owners who provide the keywords, servers, and storage and keep them running. AIGC Chain is the infrastructure that facilitates this supply and demand. As AIGC becomes a bigger part of people's digital lives, AIGC Chain will be able to handle more transactions. Training models with unique keywords is similar to registering a website: keywords are assigned on a first-come, first-served basis. This provides an opportunity for those with the greatest need and the sharpest business sense to explore the advantages early. Even if a keyword for a subject has already been registered, participants can still register similar keywords by adding an extension. As long as the models are trained with the most relevant data, the highest-rated keywords will rank higher and be more valuable for users.
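The first-come, first-served keyword registration and the 70/30 community-fee split described here can be sketched as follows. All names (KeywordRegistry, split_community_fee, the wallet addresses) are hypothetical illustrations, not the actual on-chain contract:

```python
class KeywordRegistry:
    """First-come, first-served keyword registration, like domain names.

    A hypothetical in-memory sketch: on AIGC Chain this mapping would live
    in a smart contract, with each keyword pointing at a trained small model.
    """

    def __init__(self):
        self._owners = {}  # normalized keyword -> owner wallet address

    def register(self, keyword, owner):
        key = keyword.strip().lower()
        if key in self._owners:
            return False          # taken: the first registrant keeps it
        self._owners[key] = owner
        return True

    def owner_of(self, keyword):
        return self._owners.get(keyword.strip().lower())

def split_community_fee(total):
    """Split a community fee per the whitepaper: 70% to the contributor's
    wallet, 30% to the AIGC Chain treasury fund."""
    contributor = total * 0.70
    return contributor, total - contributor

registry = KeywordRegistry()
registry.register("garfield cat", "0xAlice")   # first registrant wins
registry.register("garfield cat", "0xBob")     # rejected: already taken
registry.register("garfield cat 2d", "0xBob")  # an extension is still available
```

The registry treats keywords case-insensitively so "Garfield Cat" and "garfield cat" resolve to the same model, mirroring how domain registration ignores case.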
AIGC Chain chooses which small models to recommend based on a number of factors, such as user engagement and relevance. In the future, users may be able to create content without typing specific keywords, thanks to natural language processing. The platform will offer a drop-down menu under the same keyword with a ranking system, where users can select the most relevant and highly ranked models.

2. GPU and Storage Contribution: A GPU server and data storage are essential for a diffusion model to store generated content so it can be accessed and used. AIGC Chain has already set up a number of GPU servers and data resources to support its services during the pilot and alpha phases. By whitelisting certain server and storage providers, AIGC Chain can ensure that its infrastructure is stable and reliable during these early stages of development. AIGC Chain can gradually add more servers and storage as the infrastructure matures, eventually removing the whitelist to increase flexibility and scalability. The server and storage providers are essential to the success of AIGC Chain and will be rewarded with community fees for their contributions. AIGC Chain has put in place a solid infrastructure and a list of GPU suppliers to support its growth and development. AIGC Chain also intends to reward people and partners who share their knowledge, data, servers, and storage space when the market allows. Through these rewards, AIGC Chain hopes to incentivize users to contribute valuable resources for developing AI technologies that are driven by and for the users. This approach will make collaboration and participation easier by ensuring that AIGC Chain has access to the data and resources it needs to keep growing and improving its services.
By giving rewards, AIGC Chain will become a distributed infrastructure that aligns the interests of its users and partners to build a thriving ecosystem that powers users' digital lives. For example, users can register a new keyword and train its model by uploading related images, or contribute data or perform data labeling work for a GPT model. Once the work is done and validated by the community, users receive token rewards.

2D Capabilities

1. 2D Profile Picture (PFP), Decentralized ID (DID), and NFT: A PFP is an image or photograph that represents a person or entity on a social media platform, website, or other online profile. PFPs are often used to help users identify and connect with each other, and they can also serve as a visual representation of a person's personality, interests, or values. PFPs are a common part of many online platforms because they help users personalize their profiles to show who they are and what they like. A DID is a type of digital identity that can only be used by the person or organization that owns it, avoiding control by a central authority or third party. DIDs can give users more control over their own digital identity and the data associated with it, a critical feature for protecting privacy, security, and autonomy. For example, consider a person who wants to access a secure online service. With a DID, that person can create and manage their own digital identity to prove who they are and access the service. The user keeps control of their own identity and the information connected to it, and the service provider does not have to store or manage the user's sensitive personal information. The combination of a PFP NFT with a DID can provide a more personalized and user-friendly way to manage and control digital identities.
By using a PFP NFT as part of a DID, users can create a visual representation of their digital identity that is unique and recognizable. The ability to differentiate themselves from others online is uniquely important, and this simplifies how users interact with each other and verify another party's identity. Previously, the use of NFTs was limited to creating unique and verifiable profile pictures (PFPs) for users. With the advent of AIGC technology, the potential uses of NFTs have expanded significantly. Users can use their DID accounts to create any kind of image based on their creative ideas and register their creations as intellectual property (IP) rights. This lets users create and manage their own digital assets, which they can use to make new PFPs, digital art, collectibles, or other digital items. AIGC Chain provides the infrastructure and tools needed to create and manage these digital assets, with the goal of fostering a thriving ecosystem. AIGC Chain offers an easy-to-use interface that lets owners of PFP NFTs create new PFP NFTs quickly and easily with the help of pre-trained AIGC knowledge models. To use this service, users pay a community fee to the owner of the knowledge models and to the server and storage providers who make the network operations possible. Participants can also use their decentralized ID (DID) accounts to create new content from non-NFT text or image inputs; the new images can still be registered as NFTs to protect the rights under their DIDs.

2. Content Registration for Better Generation Quality: As explained in the previous section, AIGC Chain has a unique algorithm that allows users to generate images in a consistent artistic style with creative variation, for the purpose of creating higher-quality images or content.
To do so, users register and train their images or content on AIGC Chain by creating a unique keyword name for the uploaded image or content. With this keyword name, owners and other members of the ecosystem can easily find the keyword and its linked small model to create new content. Users can create, share, and use trained models that are specific to their interests and needs. They can also contribute their own data and knowledge to the ecosystem to improve the accuracy and diversity of the generated content. With the ability to generate high-quality, consistent, and unique content, AIGC Chain can help enable new and exciting uses for NFTs and other digital assets. The NFT image below was generated after the original concept image was registered and trained on the AIGC Chain network. When users want to create high-quality, high-resolution content, they usually prefer to register and train a small model. After the image or content is registered on AIGC Chain, the owner can apply a variety of skills provided by the ecosystem. Some of the current skills available on AIGC Chain include: text to image, image to image, addition/removal of subjects in an image, and merging styles for new creations. More skills such as chat, personality definition, text to video, and text to 3D objects will be added to the platform upon integration. AIGC Chain encourages community involvement in the development process by offering opportunities for testing and collaboration, along with governance proposals to vote on and suggest new skills for integration.

3. 2D Utilities

Seed as NFT: The seed in CLIP-guided generation is a random string of numbers or code used to initialize the model and uniquely determine the noise of the diffusion process. The seed is a unique value that determines the output of the model when given a specific input. It is similar to a "starting point" or "starting condition" that influences the generation process.
When generating images or content, users are producing seeds, and different seeds can produce different results. Quality seeds can be considered valuable assets. Users may save and register their seeds as NFTs under their decentralized ID (DID) on the AIGC Chain platform. This allows them to easily manage and verify the ownership of their seeds and potentially trade or sell them in online marketplaces. By using different seeds, users can generate a wide range of results to save and share in order to produce the best outcomes. The use of seeds in CLIP is similar to the use of seeds in other generative models: they help ensure the model's capability to produce a variety of outputs without overfitting to a particular input.

4. Editing Skills

Regenerating by wiping: Using the wiping constraint, participants can select an area in the input image to conceal from CLIP's "translation." The wiping constraint is in a black-and-white format, where the black areas indicate which parts of the image constraint will be concealed while the white areas will be redrawn in the resulting image. There are three types of redrawing. The first uses the corresponding area of the image constraint to draw content similar to the input image area (using the original image as a constraint; very similar). The second uses only the color matching of the corresponding area of the image constraint as a constraint (redrawing; color matching is similar). The third uses a new constraint or random noise from the model as a constraint to draw content that is less similar to the input image area. The demo below shows an example of regenerating the facial image of an Asian man from that of a Caucasian man by using a new constraint.

Removing by wiping: Users can also remove the subject in the wiped areas. By combining regeneration and removal, users can add or remove subjects in an image wherever they see fit.
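The seed and wipe-mask behaviors described above can be illustrated with a toy sketch. This is not AIGC Chain's implementation; the function below is hypothetical, and a seeded random generator stands in for the diffusion model's noise source. Pixels under the black (concealed) mask areas are kept, pixels under the white areas are redrawn, and the same seed reproduces the same redraw:

```python
import random

def wipe_and_redraw(image, mask, seed):
    """Toy inpainting sketch: keep pixels where the mask is black (0,
    concealed), redraw pixels where the mask is white (255).
    `image` and `mask` are flat lists of grayscale values (0-255)."""
    rng = random.Random(seed)  # the seed fixes the "starting condition"
    result = []
    for pixel, m in zip(image, mask):
        if m == 255:                       # white area: redraw
            result.append(rng.randrange(256))
        else:                              # black area: keep the original
            result.append(pixel)
    return result

image = [10, 20, 30, 40]
mask  = [0, 255, 255, 0]   # conceal (keep) the edges, redraw the middle

a = wipe_and_redraw(image, mask, seed=42)
b = wipe_and_redraw(image, mask, seed=42)
c = wipe_and_redraw(image, mask, seed=7)   # a different seed generally redraws differently

assert a == b                        # same seed: identical, reproducible redraw
assert a[0] == 10 and a[3] == 40     # black-masked pixels are untouched
```

This is also why a "quality seed" is tradeable as an asset: whoever holds the seed can deterministically reproduce the generation it produced.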
Merging subjects for new creation: By setting weights, two text constraints can simultaneously affect the result. For example, if "cow" and "rabbit" are fused as text constraints, the model will generate a new creature that features a rabbit's head on a cow's body, or vice versa. By controlling the weights of the text constraints, users can fine-tune the results of their creations to achieve the desired effects and outcomes.

Combining artistic styles to create something new: After the images are "translated" by CLIP, they can be combined with another artistic model according to different weights, so that the generated result carries features from both styles. For example, the top two images are samples from Metanaut, one of the NFTs in the AIGC Chain ecosystem. The bottom images are demo images generated by merging the Metanaut NFTs with a Japanese artistic model.

Upgrading the resolution or expanding the image to a wider or larger size: By using the AIGC model, users can improve the resolution of an image or expand its size by setting the appropriate parameters.

Fashion Design: AIGC Chain can be used by e-commerce shops to generate updated designs. This can be done by first training the model on data submitted by the communities. The first demo above shows new designs generated from an uploaded current design and market intelligence. The second demo shows new designs generated by merging two uploaded current designs with market intelligence.

Personalized Ads and product design: Users can create personalized images for advertising by incorporating market intelligence data. This can be achieved through AIGC models, which allow users to create unique and customizable images by combining different elements from market trends and styles.

Text to Video: Text-to-video generation is an active area of research in artificial intelligence.
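The weighted merging of two constraints described above can be sketched numerically. In a hypothetical embedding space (the 4-dimensional vectors and the linear-blend rule below are made up for illustration, not AIGC Chain's actual model), fusing "cow" and "rabbit" amounts to a weighted combination of their embeddings, which then conditions generation; the same idea applies when mixing artistic-style models by weight:

```python
def blend_constraints(emb_a, emb_b, weight_a, weight_b):
    """Blend two text-constraint embeddings by their weights.
    Weights are normalized so the blend stays in the same scale."""
    total = weight_a + weight_b
    wa, wb = weight_a / total, weight_b / total
    return [wa * a + wb * b for a, b in zip(emb_a, emb_b)]

# Made-up "embeddings" for the prompts "cow" and "rabbit".
cow    = [1.0, 0.0, 0.5, 0.2]
rabbit = [0.0, 1.0, 0.5, 0.8]

# Equal weights: the result sits midway between both concepts.
half = blend_constraints(cow, rabbit, 1, 1)      # ~ [0.5, 0.5, 0.5, 0.5]

# A heavier "rabbit" weight pulls the blend toward the rabbit embedding.
rabbity = blend_constraints(cow, rabbit, 1, 3)   # ~ [0.25, 0.75, 0.5, 0.65]
```

Tuning `weight_a` and `weight_b` is the fine-tuning knob the text refers to: shifting weight toward one constraint makes the output resemble that subject or style more strongly.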
AIGC Chain models generate a sequence of images that correspond to the words in the text input. The generated images are then combined into a video by arranging them in a sequence to play back in ra