
2019

Fall 2019

Aug. 30

Managing and Analyzing Simulation Data
Fabio Porto, LNCC, Petropolis, Rio de Janeiro, Brazil
Host: Florin Rusu

Abstract

TBD

Biography

TBD

Sept. 6

Push the Limits of Wireless Connectivity for IoT Devices
Longfei Shangguan, Microsoft Cloud and AI
Host: Wan Du

Sept. 13

Structure as a Sensor for Indirect Occupant Monitoring through Structural Vibrations
Mostafa Mirshekari, Carnegie Mellon University
Host: Shijia Pan

Abstract

TBD

Biography

TBD

Sept. 20

Scale- and Context-aware Convolutional Neural Networks for Non-Intrusive Load Monitoring
Yu Zhang, UC Santa Cruz
Host: Wan Du

Abstract

TBD

Biography

TBD

Sept. 27

Exploration, Mapping, and Navigation of Mobile Indoor Robots Under Motion and Sensing Uncertainties
Jose Luis Susa Rincon, UC Merced
Host: Stefano Carpin

Abstract

TBD

Biography

TBD

Oct. 4

Abstract

TBD

Biography

TBD

Oct. 11

Oct. 18

Mathematical Modeling with Heavy-Tail Distributions
Sabir Umarov, University of New Haven
Host: YangQuan Chen

Abstract

TBD

Biography

TBD

Oct. 25

Abstract

TBD

Biography

TBD

Nov. 1 (There is no seminar scheduled today.)

Nov. 8

Abstract

Visual synthesis is the process of creating new data by manipulating, editing, or re-organizing existing data. However, attempts from non-experts often end up deviating from the manifold of real natural data, leading to unrealistic results with undesired artifacts. The goal of my research is to develop effective computational models that facilitate more realistic and stunning creations, which will bring brand new user experiences and transform the ways we communicate and collaborate. Along this direction, I have explored four topics: image enhancement, completion, stylization, and video prediction. In this talk, I will mainly introduce the background and achievements of one topic, image/video stylization, which focuses on recomposing an image with new styles. Such a technique is not only useful for novel designs and creations, but is also an important step towards understanding the factors that constitute images.

Biography

Yijun Li (https://sites.google.com/site/yijunlimaverick/) is a Ph.D. student in the Vision and Learning Lab at the University of California, Merced, working with Prof. Ming-Hsuan Yang. His research interests lie in the areas of computer vision, computational photography, and machine learning. He previously received his M.S. degree from Shanghai Jiao Tong University and his B.S. degree from Zhejiang University.

Abstract

Despite the long history of image and video stitching research, existing academic and commercial solutions still produce strong artifacts. In this work, we propose a wide-baseline video stitching algorithm that is temporally stable and tolerant to strong parallax. Our key insight is that stitching can be cast as a problem of learning a smooth spatial interpolation between the input videos. To solve this problem, inspired by pushbroom cameras, we introduce a fast pushbroom interpolation layer and propose a novel pushbroom stitching network, which learns a dense flow field to smoothly align the multiple input videos with spatial interpolation. Our approach outperforms the state-of-the-art by a significant margin, as we show with a user study, and has immediate applications in many areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.

Biography

Wei-Sheng Lai (http://graduatestudents.ucmerced.edu/wlai24/) is a Ph.D. candidate in Electrical Engineering and Computer Science at the University of California, Merced, advised by Prof. Ming-Hsuan Yang. He received his B.S. and M.S. degrees in Electrical Engineering from National Taiwan University, Taipei, Taiwan, in 2012 and 2014, respectively. His research interests include computer vision, computational photography, and machine learning.

Nov. 15

Towards Energy-Fairness in LoRa Networks
Weifeng Gao, UC Merced
Host: Wan Du

Abstract

TBD

Biography

TBD

Nov. 22

Safe Motion Planning under Uncertainty for Mobile Manipulators in Unknown Environments
Vinay Pilania, Mercedes-Benz Research & Development
Host: YangQuan Chen

Abstract

TBD

Biography

TBD

Dec. 6

Dec. 13


________________________________________________________________________________________________

Spring 2019

Jan. 25

Physiological Sensing on the Face for Inferring Cognitive States, Benjamin Tag, Ph.D. Candidate, Graduate School of Media Design, Keio University, Japan
Host: Dr. Ahmed Arif

Abstract

Awareness of fluctuating levels of cognitive performance will support better management of tasks, allow for the development of new adaptable user interfaces informed by cognitive states, and potentially serve the long-term health of users by avoiding the frustration that results from a mismatch between task demand and available cognitive performance. Technology pervasively surrounds us and enables virtually uninterrupted information retrieval and distribution, resulting in constant communication between people and computers. One of the key functions of a computer is to support its user and react to input with the response the user expects or desires, which requires an understanding of context. By using explicit and implicit input modalities, we can increase information density and allow computers to better interpret the user's context, making them context-aware. This research has mainly used consumer products, such as eyewear equipped with sensors for measuring eye movement, and infrared cameras and sensors for measuring changes in facial temperature. We show that available solutions enable us to infer states of alertness, sustained attention, and cognitive workload. The concepts, results, and tools detailed in this research can help professionals, researchers, and students gain insight into the potential of context-aware systems, in particular cognition-aware systems.

Biography

Benjamin Tag is a Ph.D. candidate and researcher at the Graduate School of Media Design at Keio University. His research interests lie in the field of Human-Computer Interaction, with a special focus on Cognition-Aware Systems. He investigates ways to understand human cognition by combining methods from cognitive psychology and pervasive computing. Specifically, he is interested in using ubiquitous technologies to augment the process of knowledge acquisition by implementing proactive recommender and intervention systems.

Feb. 1

Large Scale Training of Deep Convolutional Neural Networks, Dr. Naoya Maruyama, Research Scientist, Lawrence Livermore National Lab
Host: Dr. Dong Li

Abstract

TBD

Biography

TBD

Feb. 8

Towards Better User Interfaces for 3D, Dr. Wolfgang Stuerzlinger, Professor, School of Interactive Arts + Technology, Simon Fraser University, Canada
Host: Dr. Ahmed Arif

Abstract

TBD

Biography

TBD

Feb. 15

Multi-robot Exploration of Spatial-temporal Varying Fields, Dr. Wencen Wu, Assistant Professor, Department of Computer Engineering, San Jose State University
Host: Dr. Wan Du

Abstract

TBD

Biography

TBD

Feb. 22

Analytics for Text Entry Methods and Research, Dr. Scott MacKenzie, Associate Professor, Department of Electrical Engineering & Computer Science, York University, Canada
Host: Dr. Ahmed Arif

Abstract

A common task for users of desktop or mobile computers is the input of text. Whether preparing a report or 'texting' a friend about where to have lunch, we can't avoid this ubiquitous computing task. This talk will explore analytic methods of characterizing and comparing text input methods. We are interested in quantifying the work invested to enter text. On the Qwerty keyboard, entering "hello" takes five keystrokes. If input uses a soft keyboard on a smartphone combined with word completion, fewer keystrokes, or finger taps, are required. A special case is an ambiguous keyboard: fewer keys, with more than one letter per key. The phone keypad places 26 letters on just 8 keys. But what about other keyboards with 26 letters on, say, 7 keys, or 6 keys, or 5 keys, or ... How about text entry with just one key? This talk will present, compare, and quantify the text input process for a variety of keyboards, some with as few as one key.
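The keystroke arithmetic in the abstract can be made concrete. The sketch below is a hypothetical illustration, not code from the talk: it computes keystrokes per character (KSPC) for Qwerty and for multitap entry on the standard 8-key phone keypad, where a letter's position within its key group determines the number of presses (so "hello" costs 2+2+3+3+3 = 13 presses).

```python
# Standard letter groups on phone keys 2-9 (ITU E.161 layout).
MULTITAP = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}

# letter -> number of presses under multitap (position in group + 1)
PRESSES = {ch: i + 1 for letters in MULTITAP.values()
           for i, ch in enumerate(letters)}

def kspc_qwerty(word: str) -> float:
    # One keystroke per letter on a full keyboard.
    return 1.0

def kspc_multitap(word: str) -> float:
    # Total key presses divided by characters produced. For simplicity
    # this ignores the timeout/next-key cost of consecutive same-key
    # letters (e.g., the two l's in "hello").
    return sum(PRESSES[ch] for ch in word.lower()) / len(word)

print(kspc_qwerty("hello"))    # 1.0
print(kspc_multitap("hello"))  # (2+2+3+3+3)/5 = 2.6
```

The gap between 1.0 and 2.6 keystrokes per character is exactly the kind of quantitative comparison the talk describes.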

Biography

Scott MacKenzie's research is in human-computer interaction with an emphasis on human performance measurement and modeling, experimental methods and evaluation, interaction devices and techniques, text entry, touch-based input, language modeling, accessible computing, gaming, and mobile computing. He has more than 170 peer-reviewed publications in the field of Human-Computer Interaction (including more than 30 from the ACM's annual SIGCHI conference) and has given numerous invited talks over the past 25 years. In 2015, he was elected into the ACM SIGCHI Academy. That same year he was the recipient of the Canadian Human-Computer Communication Society's (CHCCS) Achievement Award. Since 1999, he has been Associate Professor of Computer Science and Engineering at York University, Canada.

March 1

A New Discipline for a New Technology: Food Informatics and the Internet of Food, Dr. Matthew Lange, Research Scientist & Professional Food and Health Informatician, Food Science and Technology Department, University of California, Davis
In conjunction with the CITRIS FIT Seminar
Host: Dr. Joshua Viers

Abstract

TBD

Biography

TBD

March 8

Fast and Parallelizable Ranking with Outliers from Pairwise Comparisons, Mahshid (Ashley) Montazer Qaem, Ph.D. Candidate, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Sungjin Im

Abstract

TBD

Biography

TBD

March 22

Stochastic Gradient Descent on Modern Hardware: Multi-core CPU or GPU? Synchronous or Asynchronous? Yujing Ma, Ph.D. Candidate, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Florin Rusu

Abstract

TBD

Biography

TBD

April 5

Generative Models for Robot Learning, Dr. Ajay Kumar Tanwani, Postdoctoral Scholar, Electrical Engineering and Computer Science, University of California, Berkeley
Host: Dr. Stefano Carpin

Abstract

TBD

Biography

TBD

April 12

D3D: Distilled 3D Networks for Video Action Recognition, Dr. David Ross, Researcher, Google AI
Host: Dr. Miguel Carreira-Perpinan

Abstract

TBD

Biography

TBD

April 19

'Learning to Synthesize for Natural Image and Video Editing', and 'Learning to Stitch Videos', Yijun Li and Wei-Sheng Lai, Ph.D. Student and Ph.D. Candidate, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Alberto Cerpa

Abstract

Visual synthesis is the process of creating new data by manipulating, editing, or re-organizing existing data. However, attempts from non-experts often end up deviating from the manifold of real natural data, leading to unrealistic results with undesired artifacts. The goal of my research is to develop effective computational models that facilitate more realistic and stunning creations, which will bring brand new user experiences and transform the ways we communicate and collaborate. Along this direction, I have explored four topics: image enhancement, completion, stylization, and video prediction. In this talk, I will mainly introduce the background and achievements of one topic, image/video stylization, which focuses on recomposing an image with new styles. Such a technique is not only useful for novel designs and creations, but is also an important step towards understanding the factors that constitute images.

Biography

Yijun Li (https://sites.google.com/site/yijunlimaverick/) is a Ph.D. student in the Vision and Learning Lab at the University of California, Merced, working with Prof. Ming-Hsuan Yang. His research interests lie in the areas of computer vision, computational photography, and machine learning. He previously received his M.S. degree from Shanghai Jiao Tong University and his B.S. degree from Zhejiang University.

Abstract

Despite the long history of image and video stitching research, existing academic and commercial solutions still produce strong artifacts. In this work, we propose a wide-baseline video stitching algorithm that is temporally stable and tolerant to strong parallax. Our key insight is that stitching can be cast as a problem of learning a smooth spatial interpolation between the input videos. To solve this problem, inspired by pushbroom cameras, we introduce a fast pushbroom interpolation layer and propose a novel pushbroom stitching network, which learns a dense flow field to smoothly align the multiple input videos with spatial interpolation. Our approach outperforms the state-of-the-art by a significant margin, as we show with a user study, and has immediate applications in many areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.
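The dense-flow alignment at the heart of this abstract can be sketched in a few lines. This is an assumption for illustration only, not the authors' network: it warps an image by a per-pixel flow field with bilinear sampling, the basic operation the pushbroom stitching network is described as learning; here the "learned" flow is just a constant two-pixel shift.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_flow(img: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Sample img at (y + flow_y, x + flow_x) for every pixel.

    img:  (H, W) grayscale image
    flow: (2, H, W) per-pixel displacement (dy, dx)
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([ys + flow[0], xs + flow[1]])
    # Bilinear interpolation; out-of-bounds samples are zero-filled.
    return map_coordinates(img, coords, order=1, mode='constant')

img = np.arange(25, dtype=float).reshape(5, 5)
flow = np.zeros((2, 5, 5))
flow[1] = 2.0  # sample 2 pixels to the right everywhere
out = warp_with_flow(img, flow)  # out[y, x] == img[y, x + 2] where valid
```

In the paper's setting, `flow` would be predicted per pixel by the stitching network so that the warped views line up smoothly rather than being shifted uniformly.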

Biography

Wei-Sheng Lai (http://graduatestudents.ucmerced.edu/wlai24/) is a Ph.D. candidate in Electrical Engineering and Computer Science at the University of California, Merced, advised by Prof. Ming-Hsuan Yang. He received his B.S. and M.S. degrees in Electrical Engineering from National Taiwan University, Taipei, Taiwan, in 2012 and 2014, respectively. His research interests include computer vision, computational photography, and machine learning.

April 26

Human in the Loop: How to Satisfy User Comfort Requirements and Save Energy, Claudia Chitu, Ph.D. Candidate, Visiting Fulbright Scholar, Electrical Engineering and Computer Science Department, University of California, Merced & Electronics, Telecommunications and Information Technology, University Politehnica of Bucharest, Romania
Host: Dr. Alberto Cerpa

Abstract

TBD

Biography

TBD

May 3

Adaptive and Curious Deep Learning for Perception, Action, and Explanation, Dr. Trevor Darrell, Professor, Director of Berkeley Deep Drive (BDD), Co-Director of Berkeley Artificial Intelligence Research (BAIR), Computer Science Department, University of California, Berkeley
Host: Dr. Shawn Newsam

Abstract

TBD

Biography

TBD

May 10

Smart Buildings: HVAC Occupancy and Comfort-Based Model Predictive Control, Ashish Yadav, Graduate Student, Electrical Engineering and Computer Science Department, University of California, Merced
Host: Dr. Alberto Cerpa