Yang Xiao (肖洋)

I am a final-year PhD student at Imagine, supervised by Renaud Marlet. I also work closely with Mathieu Aubry and Vincent Lepetit.

My research interests center on Deep Learning and Computer Vision, and Object Pose Estimation in particular.

I've spent time at CMM of Mines ParisTech and L2S of CentraleSupelec.

I received my master's degree in Signal and Image Processing from University Paris-Sud and my bachelor's degree in Optical and Electronic Information from Huazhong University of Science and Technology.
(Last update: May 2021)

Email  /  CV  /  GitHub  /  Google Scholar  /  LinkedIn

  • July 2020: Two papers (one spotlight and one poster) are accepted at ECCV 2020.
  • January 2020: One paper is accepted at ICLR 2020.
  • July 2019: One paper is accepted at BMVC 2019.
  • June 2019: I attend CVPR 2019 in California!
  • December 2018: One paper is accepted at ISBI 2019.
  • October 2018: I am starting my PhD in Paris!
  • Apr-Sep 2018: Research internship at Mines ParisTech and L'Oréal.
PoseContrast: Class-Agnostic Object Viewpoint Estimation in the Wild with Pose-Aware Contrastive Learning
Yang Xiao, Yuming Du, Renaud Marlet
project page | code

We train a direct pose estimator in a class-agnostic way by sharing weights across all object classes, and we introduce a contrastive learning method that has three main ingredients: (i) the use of pre-trained, self-supervised, contrast-based features; (ii) pose-aware data augmentations; (iii) a pose-aware contrastive loss.

Learning to Better Segment Objects from Unseen Classes with Unlabeled Videos
Yuming Du, Yang Xiao, Vincent Lepetit
project page

We explore the use of unlabeled videos to improve the performance of an instance segmentation model on unseen classes.

Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild
Yang Xiao, Renaud Marlet
ECCV 2020
project page | code (detection) | code (viewpoint)

We propose a simple yet effective framework for few-shot object detection and few-shot viewpoint estimation that outperforms state-of-the-art methods on both tasks.

Pixel-Pair Occlusion Relationship Map (P2ORM): Formulation, Inference & Application
Xuchong Qiu, Yang Xiao, Chaohui Wang, Renaud Marlet
ECCV 2020 Spotlight
project page | code

Sharper occlusion boundaries and better depth maps from color images.

Empirical Bayes Meta-Learning with Synthetic Gradients
Shell Xu Hu, Xi Shen, Yang Xiao, Pablo Moreno, Neil Lawrence, Guillaume Obozinski, Andreas Damianou
ICLR 2020

We revisit the hierarchical Bayes and empirical Bayes formulations for multi-task learning, which can naturally be applied to meta-learning.

Pose from Shape: Deep Pose Estimation for Arbitrary 3D Objects
Yang Xiao, Xuchong Qiu, Pierre-Alain Langlois, Mathieu Aubry, Renaud Marlet
BMVC 2019
project page | code

By taking both an image and a CAD model as network inputs, combined with pose-aware data augmentation, we improve the generalization of object pose estimation to arbitrary objects (seen or unseen).

A new color augmentation method for Deep Learning segmentation of histological images
Yang Xiao, Etienne Decencière, Santiago Velasco-Forero, Hélène Burdin, Thomas Bornschlögl, Françoise Bernerd, Emilie Warrick, Thérèse Baldeweck
ISBI 2019

Color transfer between training samples can improve performance on the segmentation of histological images of human skin when labeled data is scarce.

Hand: Header-Assisted Network Decoding
Qiuyi Wang, Yang Xiao, Michel Kieffer, Cedric Adjih

Network decoding can be achieved by exploiting the structure that the communication protocol imposes on packet headers.

Mathematics for Image Processing, Spring 2019 | 2021
Thesis Project
Machine vision for robotic manipulators

Initiated in 2018, DiXite is part of I-SITE FUTURE, a French initiative to address the challenges of the sustainable city.

The DiXite project focuses on the construction site. It acts as a hub for developing multidisciplinary research around construction scenarios in which automated and digitized processes are used to build and maintain the city of tomorrow.

The template of this website is borrowed from here.