
Thursday 24 June 2021

OPPO takes home 12 awards at CVPR 2021, while its proprietary algorithms empower its Smart Factory for the first time

OPPO makes remarkable achievements in the competition with one first-place, seven second-place, and four third-place awards.

From computational intelligence to human-centric intelligence, OPPO is exploring more advanced AI technology that can better understand and assist people.

Leading global smartphone brand OPPO recently took part in the premier annual computer vision event, the Computer Vision and Pattern Recognition Conference (CVPR) 2021. During the conference, OPPO's achievements in AI were recognized with placements in seven major Challenges, across 12 contests in total: one first-place, seven second-place, and four third-place awards, demonstrating the company's industry-leading technological strengths and innovative breakthroughs in AI.

The team participating in the CVPR 2021 competition on behalf of OPPO came from the Intelligent Perception and Interaction Department and OPPO US Research Center of OPPO Research Institute. Through the optimization and training of AI algorithms, the teams’ work continues to strengthen OPPO's AI capabilities and the ability of its AI technology to better serve people.

Eric Guo, Chief Scientist, Intelligent Perception, OPPO, said, "We are very pleased to have achieved such remarkable results again in this year's CVPR Challenges, following our inaugural participation in CVPR 2020. Last year, we won first place in the Perceptual Extreme Super-Resolution Challenge by demonstrating technology that can sharpen the appearance of blurry images, and in the Visual Localization for Handheld Devices Challenge with technology that makes fusion positioning more precise. The Challenges won by OPPO this year, such as Multi-Agent Behavior, AVA-Kinetics, and 3D Face Reconstruction from Multiple 2D Images, cover more complex and advanced areas of computer vision, including behavior detection, localization of human actions in space and time, and facial detection."

"These technologies can be used in a whole range of scenarios such as manufacturing, home, office, photography, health, and mobility," added Guo. "At OPPO, we are committed to making AI better serve people, providing users with more intelligent and convenient experiences."

Among its twelve honors, OPPO received three awards in the Multi-Agent Behavior Challenge, which assesses an AI model's ability to understand, define, and predict complex interactions between intelligent agents such as animals and human beings. OPPO ultimately won the first-place prize in the Learning New Behavior category, second place in Classical Classification, and third place in Annotation Style Transfer, distinguishing itself from over 240 other participants thanks to its leading AI capabilities. This same technology is currently playing an essential role in OPPO's smart factory, where the algorithms assist workers in reducing operational mistakes, particularly in key production steps, to ensure their own safety as well as the quality of products coming off the production line.
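
To make the task concrete, here is a minimal sketch of the kind of trajectory-based behavior classification the Multi-Agent Behavior Challenge evaluates. The keypoint counts, class names, and model are illustrative assumptions for this post, not OPPO's actual system.

```python
# Minimal PyTorch sketch: classify an interaction from the pose trajectories
# of two agents. All shapes and class names are illustrative placeholders.
import torch
import torch.nn as nn

class BehaviorClassifier(nn.Module):
    """Summarize a keypoint trajectory sequence and predict a behavior label."""
    def __init__(self, num_keypoints=7, num_agents=2, hidden=128, num_classes=4):
        super().__init__()
        in_dim = num_agents * num_keypoints * 2          # (x, y) per keypoint per agent
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)       # e.g. approach / chase / groom / other

    def forward(self, traj):                             # traj: (batch, frames, in_dim)
        _, last_hidden = self.gru(traj)                  # summarize the whole sequence
        return self.head(last_hidden.squeeze(0))         # per-clip behavior logits

# Toy usage: 8 clips, 100 frames each, 2 agents x 7 keypoints x (x, y)
clips = torch.randn(8, 100, 2 * 7 * 2)
logits = BehaviorClassifier()(clips)
print(logits.shape)  # torch.Size([8, 4])
```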

From computational intelligence to human-centric intelligence, OPPO improves AI's ability to understand people

Through its mission of "Technology for Mankind, Kindness for the World," OPPO is building capabilities in human-centric AI. In the 3D Face Reconstruction from Multiple 2D Images Challenge, OPPO's self-developed AI algorithm was able to reconstruct 3D facial shapes with an error of around 1mm, leading it to take second place in the main index score ranking. OPPO's technology overcomes problems associated with unclear facial features, exaggerated expressions, and even damaged image data caused by real-life movements, especially in dynamic videos, to produce more accurate 3D facial models.
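
As a rough illustration of the general approach only (OPPO's proprietary algorithm is not public), one classic way to reconstruct a 3D face from several 2D images is to fit a linear morphable face model to detected 2D landmarks. The sketch below assumes a simple orthographic projection and uses random placeholder data.

```python
# Hedged numpy sketch: fit a linear 3D face model (mean shape + identity basis)
# to 2D landmarks from several views by least squares. Random placeholder data.
import numpy as np

def fit_shape(landmarks_2d, mean_shape, basis, reg=1e-2):
    """
    landmarks_2d : (num_views, num_points, 2) observed 2D landmarks
    mean_shape   : (num_points, 3) mean 3D face shape
    basis        : (num_components, num_points, 3) identity shape basis
    Returns the reconstructed 3D shape minimizing 2D reprojection error
    under an orthographic projection (simply drop the z coordinate).
    """
    V, N, _ = landmarks_2d.shape
    K = basis.shape[0]
    # Stack the x/y residuals of every view into one linear system A @ coeffs = b
    A = np.tile(basis[:, :, :2].reshape(K, -1).T, (V, 1))       # (V*N*2, K)
    b = (landmarks_2d - mean_shape[None, :, :2]).reshape(-1)    # (V*N*2,)
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)
    return mean_shape + np.tensordot(coeffs, basis, axes=1)     # (N, 3) shape

# Toy usage with random data: 3 views of 68 landmarks, a 50-component basis
rng = np.random.default_rng(0)
mean, basis = rng.normal(size=(68, 3)), rng.normal(size=(50, 68, 3))
obs = rng.normal(size=(3, 68, 2))
print(fit_shape(obs, mean, basis).shape)     # (68, 3)
```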

OPPO's self-developed facial detection algorithm is able to identify 635 key feature points at a rate of 30 times per second. This technology will promote the evolution of portrait video technology, with 3D feature recognition making makeup effects and filters appear more lifelike and personalized. It will also allow for richer and more seamless AR filters on social platforms, letting users experience the cutting-edge technology in everyday moments.
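
For readers curious what "identifying key feature points" looks like in code, the sketch below shows the heatmap-decoding step common to many landmark detectors: the network predicts one heatmap per point and the coordinate is read off at the peak. The 635-point count follows the article; the network producing the heatmaps is a placeholder.

```python
# Hedged sketch of heatmap-based landmark decoding; random heatmaps stand in
# for a real network's output.
import numpy as np

def decode_landmarks(heatmaps):
    """heatmaps: (num_points, H, W) -> (num_points, 2) pixel coordinates (x, y)."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)   # peak of each heatmap
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)

# Toy usage: pretend the model produced 635 heatmaps at 64x64 resolution
heatmaps = np.random.rand(635, 64, 64)
points = decode_landmarks(heatmaps)
print(points.shape)  # (635, 2): one (x, y) pair per key feature point
```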

AI that understands space and time

OPPO's AI capabilities have already developed to the stage where they are able to recognize human actions in space and time. In the SoccerNet Challenge, OPPO took second place in both the Action Spotting and Replay Grounding tasks. The purpose of the challenge was to evaluate the ability of the algorithms to identify more than a dozen key actions in a video of a soccer game, including offside and red card violations, which are usually difficult for humans to recognize due to the complexity of the rules and the subtle interpretations of them. To be effective, the AI algorithm also needs to account for other variables such as different camera angles, as well as accurately retrieve the timestamp of the action shown in a given replay shot within the original game. The future applications of this technology are wide-reaching and will help improve the experience for sports lovers through features such as automatically generated match highlights. In a similar fashion, the technology can be used to automatically create highlights of a user's life – for example, weekly highlight clips – by analyzing the videos on their smartphone.
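
As a simplified illustration of the action-spotting formulation (not OPPO's entry), the sketch below turns per-frame action confidences into a sparse list of timestamped events using a threshold plus temporal non-maximum suppression.

```python
# Hedged numpy sketch of action spotting; the confidence matrix is random here,
# but would come from a video model in practice.
import numpy as np

def spot_actions(confidences, fps=2.0, threshold=0.5, window=10):
    """
    confidences : (num_frames, num_actions) per-frame action probabilities
    Returns a sorted list of (time_in_seconds, action_id, score) events.
    """
    spots = []
    for action in range(confidences.shape[1]):
        scores = confidences[:, action].copy()
        for t in np.argsort(scores)[::-1]:        # strongest frames first
            if scores[t] < threshold:             # below threshold or already suppressed
                continue
            spots.append((t / fps, action, float(scores[t])))
            lo, hi = max(0, t - window), t + window + 1
            scores[lo:hi] = 0.0                   # temporal NMS around the kept peak
    return sorted(spots)

demo = np.random.rand(600, 17)   # 5 minutes of features at 2 fps, 17 action classes
print(spot_actions(demo)[:3])    # earliest spotted (time, action, score) events
```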

In the MMact Challenge, OPPO took second place in both the Cross-Modal Action Recognition and Cross-Modal Action Temporal Localization tasks. OPPO's powerful AI algorithm can accurately recognize more than ten types of actions in a video, such as talking, crouching, and walking, using only visual data. This technology is expected to be widely adopted in smart homes in the future, with benefits including the ability to better take care of children, pets, the elderly, and other vulnerable groups at home. For example, the AI can alert parents in another room as soon as a baby or child exhibits actions that could be potentially dangerous.

OPPO also won third place in the AVA-Kinetics Challenge, which makes use of the industry's first dataset to include both space and time information. The Challenge's positioning task has long been one of the most popular competitions in the field of artificial intelligence, with competitors from top international technology companies and universities. The AVA-Kinetics algorithm can accurately identify not only the various behaviors of people in a video, but also the time and position at which they occur. As a result, OPPO's AI technology not only understands what you are doing but also where and when you are doing it.
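
A toy sketch of this "what, where, and when" pipeline is shown below. The person detector and action classifier are random stubs, and the action list is an illustrative subset of AVA-style labels, not the real dataset vocabulary.

```python
# Hedged, self-contained sketch of spatio-temporal action detection: detect
# person boxes at each keyframe, then classify the action inside each box
# using the surrounding clip. Both models are stubs.
import numpy as np

ACTIONS = ["stand", "sit", "walk", "talk to", "watch"]   # illustrative subset

def detect_people(frame):
    """Stub person detector: returns (num_boxes, 4) boxes as x1, y1, x2, y2."""
    return np.array([[0.1, 0.2, 0.4, 0.9], [0.5, 0.1, 0.8, 0.95]])

def classify_actions(clip, box):
    """Stub action classifier: per-action probabilities for one person box."""
    return np.random.rand(len(ACTIONS))

def detect_spatiotemporal(video, fps=25, stride=25):
    """Emit (time_seconds, box, action, score) tuples: what, where, and when."""
    results = []
    for t in range(0, len(video), stride):                # one keyframe per second
        clip = video[max(0, t - fps): t + fps]            # ~2 s of temporal context
        for box in detect_people(video[t]):
            probs = classify_actions(clip, box)
            best = int(probs.argmax())
            results.append((t / fps, box, ACTIONS[best], float(probs[best])))
    return results

video = np.random.rand(100, 224, 224, 3)                  # 4 seconds of fake frames
for time_s, box, action, score in detect_spatiotemporal(video)[:2]:
    print(f"{time_s:.1f}s  {action:8s}  score={score:.2f}  box={box}")
```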

OPPO continues to explore the frontiers of AI technology

At this year's CVPR, OPPO also reached a new milestone in more cutting-edge academic challenges, including two third-place finishes in the LOVEU (Long-form Video Understanding) Challenge. The LOVEU challenge requires AI technology to understand the content of a video and segment it into chunks without being given pre-defined categories. Given the huge possible variety of content, the challenge poses a significant test of the ability of AI algorithms to generalize to broader situations: the AI needs to think like a human being, understand colors, objects, human actions, and even the lighting in the video, and judge how these change over time. In the future, this technology has the potential to be widely used as the foundation for further AI tasks in video processing, such as facial detection and behavior recognition.
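
One simple, generic way to segment a video without pre-defined categories, offered here as a loose illustration of the task rather than OPPO's method, is to embed each frame and mark boundaries wherever neighbouring embeddings stop resembling each other.

```python
# Hedged sketch of category-free video segmentation via embedding similarity;
# the embeddings are random stand-ins for a real visual backbone's features.
import numpy as np

def find_boundaries(embeddings, z_thresh=1.5):
    """embeddings: (num_frames, dim) -> frame indices where content likely changes."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (e[:-1] * e[1:]).sum(axis=1)                    # cosine similarity of neighbours
    return np.where(sim < sim.mean() - z_thresh * sim.std())[0] + 1

frames = np.random.randn(300, 512)                        # stand-in frame embeddings
print(find_boundaries(frames))                            # indices of likely segment cuts
```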

OPPO US Research Center participated in the Dense Depth for Autonomous Driving Challenge, demonstrating technology that can output dense 3D depth information from a single 2D image. OPPO won second place in the Self-supervised track and also took home the "Novelty Award." This technology uses deep learning models to output depth information directly from regular images, and it may replace depth sensors such as ToF in the future, bringing better indoor and outdoor navigation experiences.
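
As a hedged sketch of the general idea only: a neural network maps a single RGB image to a dense depth map, with no depth sensor in the loop. The tiny encoder-decoder below is purely illustrative; in self-supervised training, the learning signal typically comes from photometrically warping a neighbouring video frame using the predicted depth and estimated camera motion, which is not implemented here.

```python
# Hedged PyTorch sketch: predict a dense depth map from one RGB image.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        disparity = self.decoder(self.encoder(image))   # (B, 1, H, W) in (0, 1)
        return 1.0 / (disparity * 10.0 + 0.01)          # invert scaled disparity (illustrative)

image = torch.rand(1, 3, 192, 640)          # a single RGB frame
depth = TinyDepthNet()(image)
print(depth.shape)                          # torch.Size([1, 1, 192, 640]) dense depth map
```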
