Watching Opera at Your Own Ease—A Virtual Character Experience System for Intelligent Opera Facial Makeup

Peking Opera is a signature form of Chinese opera and an essential aspect of the country's rich cultural heritage, with deep historical significance. Contemporary efforts to safeguard and pass down this tradition, especially the art of Peking Opera facial makeup, are highly prioritized. With the advancement of technology, integrating traditional facial makeup with modern virtual interactive design has become a growing trend. The primary objective of this study is to create an immersive virtual character experience system by utilizing face capture technology, custom filters, and an interactive user interface. This project reveals a significant piece of Chinese culture in detail and offers users an engaging means to experience its charm through technology and entertainment. The research culminated in a comprehensive website housing a full range of virtual interactive experiences, focusing on the digitization and revitalization of Peking Opera culture and on engagement with younger audiences.


INTRODUCTION
Peking Opera is one of the five prominent forms of Chinese opera, renowned for its captivating facial makeup. The designs capture the audience's attention at first glance, boasting vivid color contrasts, distinctive styles, and nuanced facial expressions. The makeup serves as a vital gateway for the audience to immerse themselves in the tale and understand the complex roles of the characters [1]. This paper explores the development of a virtual interactive experience system for Peking Opera facial makeup, aiming to enhance the preservation and propagation of Peking Opera culture.

Research status
Currently, the primary methods employed for the digital preservation of Peking Opera facial makeup involve archiving the designs within cultural research institutions, displaying facial makeup images on websites, and developing digital resources [2]. However, common issues persist in these approaches: the designs appear as static patterns, devoid of dynamic interaction with audiences. Additionally, users often struggle to immerse themselves in the cultural context behind the facial makeup and fail to gain a comprehensive understanding of this essential piece of culture during their viewing experiences.
Dynamic face capture technology, extensively utilized in design, film, and television, enables a more intricate and sophisticated approach that incorporates human actions and audience interaction [3,4]. This technology requires high-quality data but yields more vibrant effects. In recent years, the convergence of face capture and interaction design has emerged as a prominent design trend, promising users a more immersive and engaging experience [5,6].
In a study conducted in 2021, a team of researchers harnessed 3D modeling techniques and object detection to craft personalized facial mappings for applying Peking Opera facial makeup filters. Their innovative approach facilitated swift and realistic processing by generating facial representations with depth cameras and implementing 3D differential deformation to establish personalized face models based on facial muscular structure and movement [7].
Although their depth-based facial model works well for its designated task, it is limited in other respects. The model relies on a base design with pre-defined control points and is less successful when the face is turned to the side, due to its dependence on depth. Depth cameras are also less readily accessible than standard 2D cameras. Furthermore, this project intends to expand beyond facial filters alone toward a fully interactive experience.

Research Objective
In this paper, we present a comprehensive virtual character experience system for intelligent opera facial makeup. The system comprises three key components. The first part provides a systematic exploration of the cultural context and classification of Peking Opera facial makeup; this section equips readers with a foundational understanding of the composition and cultural significance of these distinctive designs.
The second part gives users the opportunity to take a short multiple-choice quiz.They then receive a relevant character and can virtually try on the selected character's facial makeup through face capture technology.
The third section grants users the freedom to select patterns for different facial regions and experiment with mix-and-match options. Throughout this process, users can evaluate the real-time effects of their makeup design choices and interact with the screen through face capture technology.

METHOD
The technical aspects of this project involve four major components: facial landmark detection, direct face mask mapping, realistic facial makeup mapping, and finally UI design.The primary libraries utilized for the project include NumPy, MediaPipe, OpenCV, Pillow, pandas, and Flask.

Facial Landmark Detection
For facial detection and precise localization of essential facial landmarks, we harnessed the combined capabilities of MediaPipe and OpenCV. To further refine the precision and reliability of detection, each video frame undergoes a preprocessing stage that applies Contrast Limited Adaptive Histogram Equalization (CLAHE). This algorithm partitions the image into discrete segments and performs localized contrast manipulation, enhancing the perceptibility of critical facial features. The algorithm selectively targets the luminance component of the frame, extracted through conversion to the LAB color space. Notably, applying CLAHE mitigates the adverse effects of glare and ameliorates issues stemming from suboptimal lighting conditions.
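The preprocessing step can be sketched as follows. This is a simplified, pure-NumPy illustration of the CLAHE idea (tile-wise histogram equalization with a contrast limit); the actual system uses OpenCV's `cv2.createCLAHE` on the L channel of the LAB-converted frame, and the tile count and clip fraction here are assumed tuning values, not the project's settings.

```python
import numpy as np

def clahe_like(gray, tiles=4, clip=0.05):
    """Simplified contrast-limited histogram equalization.

    Splits the image into tiles, clips each tile's histogram at a limit,
    redistributes the excess, and remaps intensities through the resulting
    cumulative distribution.  Assumes dimensions divisible by `tiles`.
    """
    h, w = gray.shape
    out = np.empty_like(gray)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            tile = gray[i*th:(i+1)*th, j*tw:(j+1)*tw]
            hist, _ = np.histogram(tile, bins=256, range=(0, 256))
            limit = max(1, int(clip * tile.size))        # contrast limit
            excess = np.clip(hist - limit, 0, None).sum()
            hist = np.minimum(hist, limit) + excess // 256  # redistribute
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) * 255 // max(1, cdf.max() - cdf.min())
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = cdf[tile]
    return out
```

On a low-contrast frame, the per-tile remapping stretches the local intensity range, which is what makes the landmark detector's job easier under glare or dim lighting.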
The MediaPipe library furnishes a comprehensive set of 478 facial landmarks, as shown in Figure 1. Through careful planning and evaluation, a subset of 83 points was curated to facilitate an optimized and computationally efficient landmark mapping. This selection predominantly encompasses the facial periphery and pivotal anatomical features, including the nasal, ocular, and oral regions.
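Once curated, applying the subset is a simple indexing operation. The paper does not publish the exact 83 indices, so the indices below are illustrative placeholders only (a few points from MediaPipe's face-oval, eye, nose, and mouth regions), not the actual curated set.

```python
import numpy as np

# Hypothetical stand-in for the curated subset of MediaPipe's 478 landmarks.
EXAMPLE_SUBSET = sorted({10, 338, 297, 332, 284, 251, 389, 356,  # outline
                         1, 4,                                   # nose
                         33, 133, 362, 263,                      # eye corners
                         61, 291})                               # mouth corners

def select_landmarks(all_points, indices=EXAMPLE_SUBSET):
    """Keep only the curated subset of an (N, 2) landmark array."""
    pts = np.asarray(all_points, dtype=float)
    return pts[indices]
```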
Because the MediaPipe landmarks are confined to the central facial region, an algorithmic solution was needed to ascertain the position of the hairline and extend the facial makeup to properly encompass the whole face. This objective is realized by extracting a directional vector, derived from two landmark points, that delineates the direction of the hair region. Subsequently, employing image thresholding techniques, the precise location of the hairline is demarcated, enabling the incremental adjustment of individual landmark points to achieve an optimal alignment. Supplementary post-processing measures are then enacted to ensure a proper curvature and to mitigate potential anomalies.
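The core of the hairline search can be sketched as stepping along the directional vector until thresholding flags a skin-to-hair transition. This is a minimal sketch under assumptions: the intensity threshold and step budget are invented tuning values, and the real system adjusts many landmark points with post-processing rather than returning a single hit.

```python
import numpy as np

def find_hairline(gray, forehead_pt, direction, dark_thresh=80, max_steps=200):
    """Step from a forehead landmark along a direction vector until the
    pixel intensity drops below a threshold (skin -> hair transition)."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)                       # unit step along the vector
    p = np.asarray(forehead_pt, float)
    h, w = gray.shape
    for _ in range(max_steps):
        p = p + d
        x, y = int(round(p[0])), int(round(p[1]))
        if not (0 <= x < w and 0 <= y < h):
            return None                          # left the frame: not found
        if gray[y, x] < dark_thresh:             # first dark pixel = hairline
            return (x, y)
    return None
```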

Character and Mask Matching.
There are currently several databases that divide, organize, and introduce the intricate patterns of Peking Opera facial makeup from various perspectives [8,9]. In the selected database, 397 representative faces were classified based on character, character traits, source, makeup design type, painting method, main colors, facial feature designs, and so on [10]. The classification criteria and content of this database served as the foundation for the subsequent design.
Drawing inspiration from short quizzes, the character matching portion is designed to be a swift and engaging activity. This interactive process guides users through the selection of a facial mask that harmonizes with their identified attributes. A personality profile is discerned from the quiz answers; this archetype is then correlated with the database, where masks are organized by the features and characters that correspond to the various personalities.

Stiff Mask Mapping.
Following the user's mask selection, the mask filter is applied via "stiff mask mapping." This choice stems from the intrinsic complexity introduced by the heterogeneous array of masks in the database, characterized by a multitude of sizes and disparate facial feature locations. The laborious nature of achieving a precise mask fit through manual landmark alignment (for all 397 masks) propelled the adoption of a more systematic, algorithmic approach, hence the "stiff mapping." Initial preprocessing procedures were systematically enacted on the selected masks. These encompassed a background removal protocol facilitated by a flood fill algorithm, which generated an alpha (opacity) layer based on the surrounding white pixels, followed by an automatic cropping operation that eliminated surplus spatial margins around the facial mask. This process can be seen in Figure 2.
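The two preprocessing operations can be sketched together: flood-fill the near-white background inward from the image border to build an alpha layer, then crop to the opaque bounding box. This pure-NumPy sketch assumes a "white" tolerance value; interior white regions (e.g., white areas inside the face design) are correctly preserved because they are not connected to the border.

```python
import numpy as np
from collections import deque

def remove_background_and_crop(rgb, white_thresh=245):
    """Flood-fill the surrounding white background to an alpha channel,
    then autocrop to the mask's bounding box."""
    h, w, _ = rgb.shape
    white = (rgb >= white_thresh).all(axis=2)
    bg = np.zeros((h, w), bool)
    q = deque((y, x) for y in range(h) for x in (0, w - 1))
    q.extend((y, x) for y in (0, h - 1) for x in range(w))
    while q:                                     # BFS over border-connected white
        y, x = q.popleft()
        if 0 <= y < h and 0 <= x < w and white[y, x] and not bg[y, x]:
            bg[y, x] = True
            q.extend(((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)))
    alpha = np.where(bg, 0, 255).astype(np.uint8)
    ys, xs = np.nonzero(alpha)                   # autocrop to the opaque region
    rgba = np.dstack([rgb, alpha])
    return rgba[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```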
Next, drawing upon a constellation of 20 key landmarks, the facial mask is resized and rotated to harmonize with the facial contours. The proper placement is then calculated, and image masks are generated to produce a satisfactory masking effect even without manual landmark definition.
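The resize-and-rotate computation amounts to a similarity transform. The system uses around 20 landmarks; as a sketch, two anchor points (e.g., a reference segment on the mask and the corresponding segment detected on the face) are enough to illustrate how scale, rotation, and translation are recovered.

```python
import numpy as np

def align_transform(mask_a, mask_b, face_a, face_b):
    """Similarity transform (scale + rotation + translation) mapping the
    mask's reference segment onto the corresponding face segment."""
    ma, mb = np.asarray(mask_a, float), np.asarray(mask_b, float)
    fa, fb = np.asarray(face_a, float), np.asarray(face_b, float)
    vm, vf = mb - ma, fb - fa
    scale = np.linalg.norm(vf) / np.linalg.norm(vm)
    angle = np.arctan2(vf[1], vf[0]) - np.arctan2(vm[1], vm[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])              # rotation scaled by `scale`
    t = fa - R @ ma                              # translation aligning anchors
    return R, t

def apply_transform(R, t, pts):
    """Map (N, 2) mask points into face coordinates."""
    return (np.asarray(pts, float) @ R.T) + t
```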

Feature-based/Realistic Facial Makeup Mapping.
In the context of feature-based facial makeup mapping, the dataset is partitioned into discrete components which, when combined, form a cohesive composite facial makeup design. Because the components share uniform geometries and facial structures, manual annotation of the design engenders an improved fit for the subsequent mapping process. For facial mapping, facial landmarks are first detected and then refined through the application of optical flow, a technique for facilitating seamless motion interpolation and minimizing detection inaccuracies. Next, the convex hull, the outer contour encompassing the facial region, is identified. Employing Delaunay triangulation, the entire constellation of landmark points comprising the facial makeup, as well as those derived from face detection, are interconnected. Since the manually labelled landmark points on the makeup design match those identified on the face, a direct correlation is established, facilitating the individual mapping of each triangular region in the design onto the corresponding facial triangle. This transformation forms the bedrock of feature-based mapping, warping the makeup to align harmoniously with distinctive facial attributes. The process is shown in Figure 3.
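The heart of this warp is a per-triangle affine map: once Delaunay triangulation pairs each design triangle with a face triangle, every pixel of the design triangle is carried over by the transform below (in practice applied per-triangle with an image-warping routine such as OpenCV's `cv2.warpAffine`; this sketch shows only the geometry).

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Affine transform mapping a source triangle onto its counterpart.

    Solves [x y 1] @ M = [x' y'] for the 3x2 matrix M using the three
    corner correspondences; valid when the corners are not collinear.
    """
    S = np.asarray(src_tri, float)               # 3 x 2 source corners
    D = np.asarray(dst_tri, float)               # 3 x 2 destination corners
    A = np.hstack([S, np.ones((3, 1))])
    return np.linalg.solve(A, D)

def warp_points(M, pts):
    """Apply the 3x2 affine matrix to (N, 2) points."""
    P = np.asarray(pts, float)
    return np.hstack([P, np.ones((len(P), 1))]) @ M
```

Because the transform is affine, interior points (computed here via homogeneous coordinates) move consistently with the corners, so adjacent triangles stay stitched together along shared edges.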
In the final phase, image masking strategies are integrated. This step recalibrates the makeup to accommodate the user's eyes and mouth, yielding an immersive and accurate experience.

User Interface (UI) Design.
To seamlessly integrate our Python-based backend functionalities into a user interface (UI), we opted to construct a dynamic web platform using the Flask framework. Flask facilitated the integration of various languages and technologies, including Python, Jinja2, HTML, JavaScript (JS), and Cascading Style Sheets (CSS), thereby affording a robust foundation for customization and the incorporation of a multitude of features. The utilization of Bootstrap, combined with HTML for structural design, CSS for aesthetic styling, and JS for interactive automation, culminated in an enriched and well-designed website.
Python was harnessed as the underlying backend language, orchestrating the multifaceted tasks encompassing dynamic page generation as well as intricate computer vision and image processing operations. The incorporation of Jinja2, in collaboration with Python and HTML, engendered a modular architectural framework, simplifying the expansion of various website components. This modular approach facilitates seamless scalability through the simple augmentation of data structures within the primary Python script. This design paradigm conveys the inherent adaptability of the website, enabling the facile introduction of new elements and future adjustments as necessitated. An example of the UI's modular design is shown in Figure 4.
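The data-driven modularity described above can be sketched as follows. Section entries live in a plain data structure in the main script, and a template loop renders one block per entry; the section titles and the rendering loop here are hypothetical stand-ins, shown with plain string formatting so the sketch is self-contained (in the real site, a Jinja2 `{% for %}` block inside a Flask template plays the role of the loop).

```python
# Hypothetical page data; adding a section means appending one dict here.
SECTIONS = [
    {"title": "Base Colors", "body": "Red, black, blue, white, ..."},
    {"title": "Design Types", "body": "Complete face, three-tile face, ..."},
]

def render_page(sections):
    """Render one <section> block per data entry, mimicking a template loop."""
    parts = ["<main>"]
    for s in sections:
        parts.append(
            f"<section><h2>{s['title']}</h2><p>{s['body']}</p></section>")
    parts.append("</main>")
    return "".join(parts)
```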

Peking Opera Facial Makeup Systematic Reading.
The first part of the background information page comprises useful context and information about Peking Opera, supplemented by various visuals. When a user visits the website, they will see the vibrant home page shown in Figure 5 and can click the "Begin" button, or simply scroll down, to see the first paragraph, which provides a quick introduction to Peking Opera.
To delve deeper into the cultural aspects and history, the user can go beyond the summaries by clicking the "Peking Opera Facial Makeup Details" button shown in Figure 6. This new page includes sections discussing the various parts of facial makeup designs and their significance. The page leads off with the base colors of Peking Opera makeup, detailing the basic meanings, representative characters, and repertoire of the seven main colors: red, black, blue, white, purple, yellow, and green. Next, the basic information, pattern division, and representative characters of the various makeup design types (complete face, three-tile face, fragmented face, and others) are introduced. The final section covers painting methods, introducing the four primary techniques of kneading, hooking, erasing, and breaking.

Intelligent Character Matching
Subsequently, the website guides the user to the character-matching system, as illustrated in Figure 7. The user is prompted to provide four key details: a nickname, gender, astrological sign, and blood type. Upon selecting "Generate," the system employs a combination of the user's inputs and a stochastic algorithm to locate the most suitable facial makeup design in the database. A concise description of the design's type, color palette, matching character, and corresponding personality traits is then provided. Users can also virtually adorn themselves with the chosen design by clicking "Individualized Mask Experience." This system harnesses a user's horoscope, blood type, and facial characteristics to pinpoint an appropriate facial makeup design, considering its color scheme, design style, and creative method, all retrieved from the extensive database.

Virtual Interactive Experience
Users enter the virtual interactive experience interface by clicking the "Immersive Face Filter" button, shown in Figure 8. On the left side of the interface, the user can choose designs for six different parts: head, eyebrows, eyes, nose, mouth, and base color. This can be done in any combination, allowing users to create their own facial makeup design. There is also a "Random Face Change" button, which randomly generates a facial makeup design.
During the selection process, the user can observe changes to the face in real time. As the user turns their face in any direction, the virtual facial makeup follows the angle of the face through face capture technology and the filter application algorithm. Finally, by clicking the "Screenshot" button, the photo can be saved.

User Interaction
The user engages first with the intelligent character matching system and then with the virtual interactive experience portion. This design gives users a deeper sense of engagement and allows them to understand the diversity and individuality of the facial makeup designs more intuitively.

CONCLUSION AND FUTURE WORK
In contrast to prior research, this study emphasizes integrating face capture technology with interface design to build a comprehensive virtual character experience system for Peking Opera facial makeup. Within the domains of User Experience (UX) and Human-Computer Interaction (HCI), contemporary design has increasingly embraced the exploration of the experiential facets of product usage alongside the design of interactive products [11]. This research delves deeply into user psychology and interaction requirements, opening a promising new chapter in the preservation of traditional culture through the lenses of human-computer interaction and digitization.
In the future, our plan involves refining the design of the website, expanding the development of related applets, and conducting field tests in public spaces like museums and art experience halls.Importantly, our endeavor is poised to uniquely enhance interpersonal relationships, contributing to the broader dissemination and exchange of cultural heritage.

Figure 1 :
Figure 1: The facial landmarks provided by the MediaPipe library.

Figure 2 :
Figure 2: The initial preprocessing procedures of the selected mask.

Figure 3 :
Figure 3: The process of feature-based facial makeup mapping.

Figure 4 :
Figure 4: Example of the UI's modular design.

Figure 5 :
Figure 5: The interface of the Peking Opera facial makeup systematic reading.

Figure 6 :
Figure 6: The interface of the Peking Opera facial makeup detailed reading.

Figure 7 :
Figure 7: The choice and match result interface of the intelligent character matching.

Figure 8 :
Figure 8: The interface of the virtual interactive experience.