
2016-17

Fall 2016

Aug. 26

Combining Virtual Reality, Psychology, Theater and Learning Sciences for Training and Assessment, Arjun Nagendran, University of Central Florida & Mursion

Abstract

Technological advancements over the last decade have opened up a plethora of possibilities for "blue-skies" research. Today, we live in a world where multi-disciplinary teams collaborate to create novel platforms that enhance every aspect of our lives. We now have the foundations to inject our cross-disciplinary ideas across traditional fields of study. Identifying voids and applying our expertise across domains results in powerful products that can have a significant impact on society. This talk will be centered around an application that leverages the ever-diminishing boundaries between virtual reality, psychology, and the learning sciences. The concepts of "avatars" and "inhabiting" will be introduced, after which real-world applications of their use will be demonstrated. In particular, the talk will focus on how human-assisted virtual avatars can be used for training and assessment across several fields such as healthcare, counseling, hospitality, and education. The effectiveness of these systems in riding the upcoming wave of virtual reality devices, including the Oculus Rift, Gear VR, and the Microsoft HoloLens, will be discussed. The talk will conclude with potential futuristic applications of the concept of "inhabiting".

Biography

Arjun Nagendran is the Co-Founder and Chief Technology Officer at Mursion, Inc., a San Francisco-based startup specializing in the application of virtual reality technology for training and assessment. He completed his Ph.D. in Robotics at the University of Manchester, UK, specializing in landing mechanisms for Unmanned Air Vehicles. Prior to Mursion, he worked for several years as an academic researcher, including leading the ground vehicle systems for Team Tumbleweed, one of the six finalists at the Ministry of Defence (UK) Grand Challenge. Arjun's research interests include coupling psychology and the learning sciences with technological advancements in remote operation, virtual reality, and control theory to create high-impact applications. During his academic career, he has served as a committee member and reviewer for several conferences and journals, including the International Conference on Intelligent Robots and Systems (IROS) and the IEEE International Symposium on Mixed and Augmented Reality (ISMAR).

Sept. 2

A Parallel Sorting Algorithm for 130K CPU Cores, Bin Dong, Lawrence Berkeley National Lab

Abstract

Parallel sorting is a fundamental algorithm in computer science, and it has become even more important in the big data era. Utilizing supercomputers for sorting is attractive since their large numbers of CPU cores have the potential to sort terabytes or even exabytes of data per minute. However, developing a parallel sorting algorithm that is efficient and scalable on a supercomputer is challenging because of the load imbalance caused by data skew and the complex communication patterns caused by multi-core architectures. In this talk, I present our experience developing and scaling a parallel sorting algorithm named SDS-Sort on a 2.57-petaflop supercomputer.
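The abstract does not detail SDS-Sort's internals, but the standard defense against data skew in parallel sorting is sample-based splitter selection: choose splitter keys from a random sample of the data so that every partition receives a comparable share even when the key distribution is lopsided. A minimal single-process sketch of that idea (the function names are illustrative, not from SDS-Sort; it assumes the input is at least `n_parts * oversample` keys long):

```python
import bisect
import random

def choose_splitters(data, n_parts, oversample=8):
    """Pick n_parts-1 splitter keys from a random sample so partitions
    stay balanced even when the key distribution is skewed."""
    sample = sorted(random.sample(data, min(len(data), n_parts * oversample)))
    step = len(sample) // n_parts
    return [sample[i * step] for i in range(1, n_parts)]

def partition(data, splitters):
    """Route each key to the bucket whose splitter range contains it."""
    buckets = [[] for _ in range(len(splitters) + 1)]
    for x in data:
        buckets[bisect.bisect_right(splitters, x)].append(x)
    return buckets

def sample_sort(data, n_parts=4):
    """Each 'core' sorts its own bucket; concatenation gives the total order."""
    splitters = choose_splitters(data, n_parts)
    return [x for bucket in partition(data, splitters) for x in sorted(bucket)]
```

On a real machine each bucket would be shipped to a different core and sorted there; the splitters are what keep those per-core workloads balanced under skew.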

Biography

Bin Dong is currently a research scientist in the Scientific Data Management group at LBNL. His research interests are in scalable scientific data management, parallel storage systems, and parallel computing. More specifically, he is exploring new algorithms and data structures for storing, organizing, sorting, indexing, searching, and analyzing big scientific data (mostly multi-dimensional arrays) with supercomputers. Bin Dong earned his Ph.D. in Computer Science and Technology from Beihang University, China, in 2013, and then worked as a postdoc in the Scientific Data Management group at LBNL until 2016.

Sept. 9

Robot Motion Planning Considering Multiple Costs and Multiple Task Specifications, Shams Feyzabadi, UC Merced

Abstract

With the recent dramatic growth of commercial robotic applications in all fields, expectations of robotic systems have escalated as well. For example, robots are tasked with increasingly complex missions featuring multiple costs that must be accounted for. In addition, with robots operating for extended periods of time in unstructured environments, it is often convenient to task the robot with multiple objectives at once and let the system determine a control strategy that jointly considers all of them. In this talk, we propose a planner for sequential stochastic decision-making problems in which robots are subject to multiple cost functions and are tasked to complete more than one goal, specified using a subset of linear temporal logic operators. Each subgoal is associated with a desired satisfaction probability that will be met in expectation by the policy executed by the controller. The planner builds upon the theory of constrained Markov decision processes and on techniques from the realm of formal verification. Our method is validated both in simulation and in outdoor tasks in which the robot autonomously traveled more than 7.5 km.
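Constrained MDPs are typically solved with linear programming, and the talk's planner adds temporal logic constraints on top; neither detail fits an abstract. As a minimal, hypothetical illustration of the sequential decision-making machinery underneath, here is plain (unconstrained) value iteration on a toy two-state MDP — the states, transition probabilities, and rewards below are invented for the example:

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-6):
    """Plain value iteration: V(s) <- max_a sum_s' P[s][a][s'] * (R[s][a] + gamma*V(s'))."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (R[s][a] + gamma * V[s2]) for s2, p in P[s][a].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy 2-state MDP: 'go' moves between A and B; staying in B earns reward 1.
states, actions = ["A", "B"], ["stay", "go"]
P = {"A": {"stay": {"A": 1.0}, "go": {"B": 1.0}},
     "B": {"stay": {"B": 1.0}, "go": {"A": 1.0}}}
R = {"A": {"stay": 0.0, "go": 0.0},
     "B": {"stay": 1.0, "go": 0.0}}
V = value_iteration(states, actions, P, R)
```

A constrained planner of the kind described would solve for a (possibly randomized) policy subject to bounds on additional expected costs, rather than maximizing a single value function as above.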

Biography

Shams Feyzabadi is currently a PhD candidate at UC Merced working under the supervision of Prof. Carpin. His field of interest is mobile robotics and more specifically he focuses on motion planning considering multiple cost functions in non-deterministic environments. He received his M.Sc. from Jacobs University Bremen in Germany in 2010 and his B.Sc. from Iran University of Science and Technology in 2007.

Sept. 16

Building the Enterprise Fabric for Big Data with Vertica and Spark Integration, Jeff LeFevre, HPE Vertica

Abstract

Enterprise customers increasingly require greater flexibility in the way they access and process their Big Data. Their needs include both advanced analytics and access to diverse data sources. However, they also require robust, enterprise-class data management for their mission-critical data. This work describes our initial efforts toward a solution that satisfies these requirements by integrating the HPE Vertica enterprise database with Apache Spark's open-source computation engine. In this talk, I will focus on our methods for fast and reliable bulk data transfers between Vertica and Spark with exactly-once semantics. I will first describe the architectures of both systems, the challenges of guaranteeing exactly-once semantics for data transfers, and the interesting tradeoffs among these challenges in our design. Specifically, our design enables parallel data transfer tasks that can tolerate task failures, restarts, and speculative execution; we show how this can be done without an external scheduler coordinating the reliable transfer between the two independent systems under these conditions. We believe this approach generalizes to the class of MapReduce systems. Lastly, I will present performance results across several system configurations and datasets. Our integration provides a fabric on which our customers can get the best of both worlds: the robust enterprise-class data management and analytics of Vertica, and the flexibility of accessing and processing Big Data with Spark.
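The talk's actual connector design is not reproduced here, but a common way to get exactly-once semantics under retries and speculative execution, without an external coordinator, is to stage each task attempt's output separately and atomically promote exactly one successful attempt per task. A hypothetical in-memory sketch of that staging-and-commit pattern (class and method names are invented for illustration):

```python
class StagedTransfer:
    """Each (task, attempt) writes to its own staging area; commit promotes
    exactly one successful attempt per task, so retries and speculative
    duplicates never produce duplicate rows in the destination."""
    def __init__(self):
        self.staging = {}    # (task_id, attempt_id) -> staged rows
        self.committed = []  # final destination table

    def write(self, task_id, attempt_id, rows):
        # Attempts never overwrite each other: they are keyed separately.
        self.staging[(task_id, attempt_id)] = list(rows)

    def commit(self, successful_attempts):
        """successful_attempts: {task_id: attempt_id} chosen by the driver."""
        for task_id, attempt_id in successful_attempts.items():
            self.committed.extend(self.staging[(task_id, attempt_id)])
        self.staging.clear()  # discard losing attempts

t = StagedTransfer()
t.write(0, 0, [1, 2])   # first attempt of task 0
t.write(0, 1, [1, 2])   # speculative duplicate of task 0
t.write(1, 0, [3, 4])
t.commit({0: 1, 1: 0})  # exactly one attempt per task is promoted
```

In a real database-backed connector, the staging areas would be temporary tables and the commit a single transactional rename, which is what makes the promotion atomic.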

Biography

Jeff LeFevre is a Software Engineer with HPE Vertica Big Data R&D in Sunnyvale, CA where he focuses on the integration with Spark. He joined Vertica in 2014 after completing his PhD from the Database Group at UC Santa Cruz. His dissertation focuses on physical design tuning for data management systems in the cloud. Prior to that he received an MS from the Systems Group at UC San Diego, and completed internships at Teradata, Google, and NEC Labs.

Sept. 23

Enabling Analytics at AWS, Mehul Shah, Amazon Web Services

Abstract

With the ubiquity of data sources and cheap storage, today's enterprises want to collect and store a wide variety of data, even before they know what to do with it. Examples include IoT streams, application monitoring logs, point-of-sale transactions, ad impressions, mobile events, and more. This data is typically a mix of structured and unstructured, streaming and static, with varying degrees of quality. Given this variety and the increasing need to be data-driven, customers want a choice of tools to leverage this data for business advantage. Toward this end, Amazon Web Services (AWS) offers a variety of fully managed data services that can be easily composed thanks to its service-oriented architecture. In this talk, we provide an overview of the breadth of data services available on AWS: storage, OLTP, data warehousing, and streaming. We give examples of how customers leverage and compose these services to handle their big data use cases, from traditional BI and analytics to real-time processing and prediction. Finally, we touch on some lessons from operating such services at scale.

Biography

Mehul is a software development manager in the Big Data division of AWS, contributing to the Redshift and Data Pipeline services. From 2011-2014, he was co-founder and CEO of Amiato, an ETL cloud service. Prior to that, he was a research scientist at HP Labs where his work spanned large-scale data management, distributed systems, and energy-efficient computing. He received his PhD in databases from UC Berkeley (2004), and MEng (1997) and BS in computer science and physics (1996) from MIT. He has received several awards including the NSDI 2016 Test of Time Award and SOSP 2007 best paper. In his spare time, he serves on the SortBenchmark committee.

Sept. 30

MacroBase: Analytic Monitoring for the Internet of Things, Peter Bailis, Stanford University

Abstract

An increasing proportion of data today is generated by automated processes, sensors, and devices—collectively, the Internet of Things (IoT). IoT applications’ rising data volume, demands for time-sensitive analysis, and heterogeneity exacerbate the challenge of identifying and highlighting important trends in IoT deployments. In response, we present MacroBase, a data analytics engine that performs statistically-informed analytic monitoring of IoT data streams by identifying deviations within streams and generating potential explanations for each. MacroBase is the first analytics engine to combine streaming outlier detection and streaming explanation operators, allowing cross-layer optimizations that deliver order-of-magnitude speedups over existing, primarily non-streaming alternatives. As a result, MacroBase can deliver accurate results at speeds of up to 2M events per second per query on a single core. MacroBase has delivered meaningful analytic monitoring results in production, including an IoT company monitoring hundreds of thousands of vehicles.
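MacroBase's exact operators are not given in the abstract. One robust outlier detector of the kind described in the MacroBase papers scores each point by its deviation from the median in units of the median absolute deviation (MAD), which, unlike the standard deviation, is not itself distorted by the outliers being hunted. A minimal batch version of that scoring (parameter names are illustrative):

```python
import statistics

def mad_outliers(xs, threshold=3.0):
    """Flag points whose |x - median| / MAD exceeds the threshold.
    MAD (median absolute deviation) is a robust scale estimate: a few
    extreme values barely move it, unlike the standard deviation."""
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs)
    if mad == 0:
        return []  # degenerate: at least half the points are identical
    return [x for x in xs if abs(x - med) / mad > threshold]
```

The streaming engine described in the talk maintains such statistics incrementally over a window instead of recomputing them per batch, and then feeds the flagged outliers to an explanation operator.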

Biography

Peter Bailis is an assistant professor of Computer Science at Stanford University. Peter's research in the Future Data Systems group (http://futuredata.stanford.edu/) focuses on the design and implementation of next-generation, post-database data-intensive systems. His work spans large-scale data management, distributed protocol design, and architectures for high-volume complex decision support. He is the recipient of an NSF Graduate Research Fellowship, a Berkeley Fellowship for Graduate Study, best-of-conference citations for research appearing in both SIGMOD and VLDB, and the CRA Outstanding Undergraduate Researcher Award. He received a Ph.D. from UC Berkeley in 2015 and an A.B. from Harvard College in 2011, both in Computer Science.

Oct. 7

Apache SystemML: Declarative Machine Learning at Scale, Niketan Pansare, IBM Almaden Research Center

Abstract

Scalable machine learning is ubiquitous in virtually every industry, including insurance, manufacturing, finance, and the health sciences. Expressing and running machine learning algorithms at scale and across varying data characteristics is challenging. In this talk, we will discuss our experience building Apache SystemML, peek at challenging optimization and implementation strategies for exploiting data-parallel platforms such as MapReduce and Spark, and provide performance and scalability insights.

Biography

Niketan Pansare works at IBM Research Almaden on advanced information management systems spanning analytics, distributed data processing platforms, and hardware acceleration, as well as their applications in mobile and cloud settings. At a high level, his research involves developing statistical models and building systems for analyzing large-scale, heterogeneous data. Prior to joining IBM, Niketan was a PhD student at Rice University, where he was advised by Dr. Chris Jermaine. His PhD thesis is titled "Large-Scale Online Aggregation Via Distributed Systems."

Oct. 14

Working around the CAP Theorem, Vijayshankar Raman, IBM Almaden Research Center

Abstract

The CAP theorem is a painful reality that all distributed systems have to deal with: they must either assume a tightly coupled setting or accept inconsistent global state. But real-world applications never have tightly coupled components. Instead, they rely on elaborate compensation logic built into the application program, usually outside the boundary of a database transaction. We present a method to achieve serializable consistency in a loosely coupled setting by allowing such compensation logic to be part of the transaction itself, and by serializing each transaction to a point after its commit.

Biography

Vijayshankar Raman is a Research Staff Member in the database group at the IBM Almaden Research Center, working on hybrid transaction and analytic processing.

Oct. 21

Bridging the I/O Gap between Spark and Scientific Data Formats on Supercomputer, Jialin Liu, Lawrence Berkeley National Lab

Abstract

Spark has been tremendously powerful in performing Big Data analytics in distributed data centers. However, using the Spark framework on HPC systems to analyze large-scale scientific data poses several challenges. For instance, parallel file systems are shared among all compute nodes, in contrast to the shared-nothing architectures Spark was designed for. Another challenge is accessing data stored in scientific data formats, such as HDF5 and NetCDF, that are not natively supported in Spark. Our study focuses on improving the I/O performance of Spark on HPC systems for reading large scientific data arrays, e.g., HDF5/netCDF. We select several scientific use cases to drive the design of an efficient parallel I/O API for Spark on supercomputers, called H5Spark. We optimize I/O performance, taking into account Lustre file system striping. We evaluate the performance of H5Spark on Cori, a Cray XC40 system located at NERSC/LBNL, and compare its I/O performance with that of MPI and NASA's SciSpark. H5Spark has enabled the largest PCA run on a supercomputer to date and has been used by various national labs. It is now endorsed by The HDF Group for further development.
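H5Spark's API is not shown here, but the heart of any such reader is a partitioning plan: split the array into contiguous hyperslabs, one per Spark partition, so that each task issues an independent (offset, count) read against the file. A sketch of that plan along the first dimension (the function name is illustrative, not H5Spark's):

```python
def row_slabs(n_rows, n_partitions):
    """Split n_rows into n_partitions contiguous (offset, count) slabs —
    the read plan each task would pass to HDF5 as a hyperslab selection.
    Remainder rows are spread over the first partitions, one extra each."""
    base, extra = divmod(n_rows, n_partitions)
    slabs, offset = [], 0
    for i in range(n_partitions):
        count = base + (1 if i < extra else 0)
        slabs.append((offset, count))
        offset += count
    return slabs

# Each Spark task would then read its slab, e.g. with h5py:
#   h5py.File(path)['dataset'][offset:offset + count]
```

On Lustre, such a plan would additionally be aligned with the file's stripe boundaries so that tasks do not contend for the same object storage targets, which is one of the optimizations the abstract alludes to.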

Biography

Jialin Liu is a research engineer at Lawrence Berkeley National Lab. He joined LBNL shortly after receiving his Ph.D. in computer science from Texas Tech University in 2015. Before that, he received his B.S. in computer science in 2011. His research interests are parallel I/O and scientific data management (typically millions of files and terabytes of data). Recently, he has been exploring object-based big-science data management and I/O format design for astronomy datasets.

Nov. 4

Nonconvex Optimization by Complexity Progression, Hossein Mobahi, Google Research

Abstract

A large body of machine learning problems require minimization of a nonconvex objective function. For some of these problems, local optimization techniques (such as gradient descent, Newton's method, etc.) may converge slowly or get stuck in suboptimal solutions. In this talk, I describe an alternative approach to nonconvex optimization. The idea is to start from a simpler optimization problem and solve that. We then progressively transform that objective function into the actual one while tracking the path of the minimizer. While this general idea has been used for a long time, its construction has been quite heuristic. Specifically, there is no principled, theoretically justified answer to how the initial (simplified) problem should be chosen and how it should be transformed into the actual problem. The success of the technique depends critically on these choices. In this talk, I argue that the Weierstrass transform (Gaussian convolution) is a sensible choice for creating simpler problems, and I support this claim mathematically. I present applications of this method to problems in deep learning and image registration.
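The construction can be made concrete in one dimension: approximate the Gaussian-smoothed objective by Monte Carlo, follow its minimizer by gradient descent, and shrink the smoothing scale toward zero. This is an illustrative sketch of the general continuation idea under those assumptions, not the talk's algorithm:

```python
import math
import random

def smoothed_grad(f, x, sigma, n=200, h=1e-4):
    """Finite-difference gradient of g(x) = E[f(x + sigma*z)], z ~ N(0,1).
    The same z samples are reused at x+h and x-h so the sampling noise
    cancels in the difference (common random numbers)."""
    zs = [random.gauss(0.0, 1.0) for _ in range(n)]
    g = lambda y: sum(f(y + sigma * z) for z in zs) / n
    return (g(x + h) - g(x - h)) / (2 * h)

def continuation_minimize(f, x0, sigmas=(2.0, 1.0, 0.5, 0.1, 0.0),
                          steps=300, lr=0.05):
    """Track the minimizer while the smoothing scale shrinks to 0
    (sigma=0 recovers the original objective)."""
    x = x0
    for sigma in sigmas:
        for _ in range(steps):
            x -= lr * smoothed_grad(f, x, sigma)
    return x

# Wiggly nonconvex test objective: a quadratic plus high-frequency ripples.
# Heavy smoothing washes out the ripples, leaving the easy quadratic.
f = lambda x: (x - 2.0) ** 2 + 0.5 * math.cos(5.0 * x)
random.seed(0)  # reproducible sketch
x_star = continuation_minimize(f, x0=-4.0)
```

At sigma = 2 the cosine term is smoothed almost to zero, so the early iterations descend an effectively convex bowl; the later, smaller sigmas then refine the solution inside the basin that descent carried over from the smoothed problem.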

Biography

Hossein Mobahi (http://people.csail.mit.edu/hmobahi) is a research scientist at Google, Mountain View. His research interests include machine learning, optimization, computer vision, and especially the intersection of the three. Prior to Google, he was a postdoctoral researcher in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. He obtained his PhD from the University of Illinois at Urbana-Champaign (UIUC) in 2012.

Nov. 18

Bootstrap and Uncertainty Propagation: New Theory and Techniques in Approximate Query Processing, Kai Zeng, Microsoft Research

Abstract

Sampling is one of the most commonly used techniques in Approximate Query Processing (AQP), an area of research made ever more critical by the need for timely and cost-effective analytics over "Big Data". The sheer amount of data and the complexity of analytics pose new challenges to sampling-based AQP, calling for innovation on several fronts: How do we estimate the errors of general SQL queries with ad-hoc user-defined functions when they are computed on samples? How do we better present approximate query results to the user? How do we build database engines that are more suitable for approximate query processing? In this talk, I will present a series of works that answer these questions. We will see that: (1) the bootstrap, an automated statistical technique, can be integrated with relational algebra theory and database systems to provide accuracy estimates for general OLAP queries; and (2) combining bootstrap error estimation with a novel uncertainty propagation theory lets OLAP query processing shift to an incremental execution engine, which provides a smooth trade-off between query accuracy and latency and serves a full spectrum of user requirements, from approximate but timely query execution to traditional exact query execution.
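The bootstrap itself is simple to state: resample the sample with replacement, recompute the aggregate on each resample, and use the spread of those replicates as the error estimate. A minimal sketch for a mean over a hypothetical numeric sample (the real systems described in the talk push this computation inside the query engine rather than looping in Python):

```python
import random
import statistics

def bootstrap_error(sample, aggregate, n_boot=1000):
    """Estimate the standard error of aggregate(sample) by resampling
    the sample with replacement and recomputing the aggregate each time."""
    n = len(sample)
    reps = [aggregate([random.choice(sample) for _ in range(n)])
            for _ in range(n_boot)]
    return aggregate(sample), statistics.stdev(reps)

random.seed(0)  # reproducible sketch
sample = [random.gauss(100.0, 10.0) for _ in range(200)]
est, err = bootstrap_error(sample, statistics.mean)
```

For a sample of 200 draws with standard deviation near 10, the reported error should land near the analytical standard error of the mean, 10/sqrt(200) ≈ 0.7, which is what makes the bootstrap usable as an automated accuracy estimate for aggregates where no closed form exists.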

Biography

Kai Zeng is a senior scientist in the Cloud and Information Services Lab at Microsoft. His research interest lies in large-scale data-intensive systems. He received his Ph.D. in databases from UCLA in 2014 and was subsequently a postdoctoral researcher at the UC Berkeley AMPLab. He has won several awards, including the SIGMOD 2012 Best Paper Award and the SIGMOD 2014 Best Demo Award.

Dec. 2

Modeling and Fast Numerical Methods for Fractional Partial Differential Equations, Hong Wang, University of South Carolina

 

Spring 2017

Jan. 20

Advanced Database Techniques for Scientific Data Processing, Weijie Zhao, UC Merced

Abstract

Scientific applications are generating an ever-increasing volume of multi-dimensional data that are largely processed inside distributed array databases and frameworks. Similarity join is a fundamental operation across scientific workloads that requires complex processing over an unbounded number of pairs of multi-dimensional points. In this talk, we introduce a novel distributed similarity join operator for multi-dimensional arrays. Unlike immediate extensions of array join and relational similarity join, the proposed operator minimizes overall data transfer and network congestion while providing load balancing, without completely repartitioning and replicating the input arrays. We formally define array similarity join and present the design, optimization strategies, and evaluation of the first array similarity join operator. When the data are rapidly updated, the join result can moreover be treated as a view defined by the similarity join. We model this process as incremental view maintenance with batch updates and give a three-stage heuristic that finds effective update plans. The heuristic also repartitions the array and the view continuously, based on a window of past updates, as a side effect of view maintenance. We design an analytical cost model for integrating materialized array views into queries. A thorough experimental evaluation confirms that the proposed techniques can incrementally maintain a real astronomical data product in a production pipeline.
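The operator itself is not reproduced here, but the trick that makes similarity joins tractable, in single-node and distributed settings alike, is spatial bucketing: hash points into grid cells of side eps so that each point only needs comparing against points in its own and neighboring cells. A minimal in-memory sketch of an eps-join on that principle (names are illustrative):

```python
from itertools import product
from math import dist

def similarity_join(points, eps):
    """All index pairs (i, j), i < j, with Euclidean distance <= eps.
    Points are bucketed into an eps-grid, so each point is compared only
    against points in its 3^d neighboring cells instead of all others."""
    cell = lambda p: tuple(int(c // eps) for c in p)
    grid = {}
    for i, p in enumerate(points):
        grid.setdefault(cell(p), []).append(i)
    pairs = set()
    for i, p in enumerate(points):
        for nb in product(*[(c - 1, c, c + 1) for c in cell(p)]):
            for j in grid.get(nb, []):
                if i < j and dist(points[i], points[j]) <= eps:
                    pairs.add((i, j))
    return pairs
```

In the distributed setting the same cells become the unit of partitioning, and the design questions the talk addresses — which cells to co-locate, what to replicate at partition boundaries — are about assigning them to machines without shuffling the whole array.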

Biography

Weijie Zhao is a PhD student in the EECS graduate group at UC Merced, working with Prof. Florin Rusu. He received his BS from East China Normal University in Shanghai. His research interests include databases and scientific data management. Weijie is an avid computer programming contestant.

Jan. 27

Image Editing and Learning Filters for Low-level Vision, Yi-Hsuan Tsai and Sifei Liu, UC Merced

Abstract

In the first part of this talk, we present a semantic-aware image editing algorithm for automatic sky replacement. The key idea of our algorithm is to utilize visual semantics to guide the entire process, including sky segmentation, search, and replacement. First, we train a deep convolutional neural network for semantic scene parsing, which is used as a visual prior to segment sky regions in a coarse-to-fine manner. Second, in order to find proper skies for replacement, we propose a data-driven scheme based on the semantic layout of the input image. Finally, to re-compose the stylized sky with the original foreground naturally, an appearance transfer method is developed to match statistics locally and semantically. We show that the proposed algorithm can automatically generate a set of visually pleasing and realistic results. In the second part, a work on learning image filters for low-level vision (e.g., edge-preserving filtering and denoising) is presented, in which a unified hybrid neural network is proposed. The network contains several spatially variant recurrent neural networks (RNNs) that act as a group of distinct recursive filters for each pixel, and a deep convolutional neural network (CNN) that learns the weights of the RNNs. The proposed model needs neither a large number of convolutional channels nor big kernels to learn features for low-level vision filters. Experimental results show that many low-level vision tasks can be effectively learned and carried out in real time by the proposed algorithm.
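The "spatially variant recursive filter" the RNNs emulate is easy to see in one dimension: y[i] = (1-w)·x[i] + w·y[i-1], where a per-pixel weight w that shrinks at strong edges makes the filter edge-preserving. A hand-crafted 1-D sketch of that behavior (in the talk's model, a CNN predicts the per-pixel weights instead of the fixed rule used here):

```python
def edge_aware_filter(x, base_w=0.9, edge_scale=10.0):
    """Spatially variant 1-D recursive filter:
        y[i] = (1 - w_i) * x[i] + w_i * y[i-1],
    where w_i decays exponentially with the local gradient |x[i]-x[i-1]|,
    so flat regions are smoothed while sharp edges pass through."""
    y = [x[0]]
    for i in range(1, len(x)):
        w = base_w * 2 ** (-edge_scale * abs(x[i] - x[i - 1]))
        y.append((1 - w) * x[i] + w * y[-1])
    return y
```

Stacking such scans along rows and columns (and learning the weights) yields the 2-D filters the abstract describes; the recursion is what lets a small network cover a large effective receptive field without big convolution kernels.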

Biography

Yi-Hsuan Tsai (https://sites.google.com/site/yihsuantsai/) received his B.S. in Electronics Engineering from National Chiao-Tung University, Hsinchu, Taiwan, and his M.S. in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor. He is currently working toward his Ph.D. at UC Merced, advised by Prof. Ming-Hsuan Yang, and is the recipient of the Graduate Dean's Dissertation Fellowship in 2016. He was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. His research interests include computer vision, computational photography, and machine learning, with a focus on visual object recognition and image editing. He has also done research internships at Qualcomm Research, the Max Planck Institute, and Adobe Research.
Sifei Liu (http://www.sifeiliu.net/) is a Ph.D. candidate in Electrical Engineering and Computer Science at UC Merced, advised by Prof. Ming-Hsuan Yang. She completed her M.C.S. at the University of Science and Technology of China (USTC) under Stan Z. Li and Bin Li, and received her B.S. in control science and technology from North China Electric Power University. She received the Baidu Fellowship in 2013. In 2013 and 2014, she was an intern at the Baidu Deep Learning Institute, and in 2015 she was a visiting student at the Chinese University of Hong Kong. She was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. Her research interests include computer vision, machine learning, and computational photography.

Feb. 3

It's All about Cache, Ming Zhao, Arizona State University

Abstract

This talk is about cache: more specifically, solid-state storage used as a cache in large-scale computing systems such as cloud computing and big data systems. With the increasing data intensity of workloads and the increasing level of consolidation in such systems, storage is becoming a serious bottleneck. Emerging solid-state storage devices such as flash memory and 3D XPoint have the potential to address this scalability issue by providing a new caching layer between main memory and hard drives in the storage hierarchy. However, solid-state storage has limited capacity and endurance, and it needs to be managed carefully when used for caching. This talk will present several recent works by the ASU VISA Research Lab that address these limitations and make effective use of solid-state caching.
First, the talk will introduce CloudCache, an on-demand cache allocation solution that understands the cache demands of workloads and allocates the shared cache capacity efficiently. It is able to reduce a workload's cache usage by 78% and the amount of writes sent to the cache device by 40%, compared to a traditional working-set-based approach. Second, the talk will present CacheDedup, an in-line cache deduplication solution that integrates caching and deduplication with duplication-aware cache replacement to improve the performance and endurance of solid-state caches. It can reduce a workload's I/O latency by 51% and the amount of writes sent to the cache device by 89%, compared to traditional cache management approaches. Finally, the talk will conclude with a brief overview of the systems research at the VISA lab.
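CacheDedup's actual replacement policies are not detailed here, but its core structure is a content-addressed cache: an address index maps block addresses to content fingerprints, and each unique block body is stored (and written to the cache device) only once, however many addresses reference it. A hypothetical in-memory sketch of that structure with plain LRU replacement:

```python
import hashlib
from collections import OrderedDict

class DedupCache:
    """Deduplicated cache: addresses map to fingerprints; each unique block
    body is stored once, so duplicate writes cost no device write."""
    def __init__(self, capacity):
        self.capacity = capacity          # max unique blocks stored
        self.addr_index = OrderedDict()   # address -> fingerprint (LRU order)
        self.store = {}                   # fingerprint -> (data, refcount)
        self.device_writes = 0

    def put(self, addr, data):
        fp = hashlib.sha1(data).hexdigest()
        if addr in self.addr_index:       # address overwrite: drop old ref
            self._deref(self.addr_index.pop(addr))
        if fp in self.store:              # duplicate content: no device write
            d, rc = self.store[fp]
            self.store[fp] = (d, rc + 1)
        else:
            while len(self.store) >= self.capacity:
                self._evict_lru()
            self.store[fp] = (data, 1)
            self.device_writes += 1
        self.addr_index[addr] = fp

    def get(self, addr):
        fp = self.addr_index[addr]
        self.addr_index.move_to_end(addr)  # mark as recently used
        return self.store[fp][0]

    def _deref(self, fp):
        data, rc = self.store[fp]
        if rc == 1:
            del self.store[fp]
        else:
            self.store[fp] = (data, rc - 1)

    def _evict_lru(self):
        addr, fp = next(iter(self.addr_index.items()))
        del self.addr_index[addr]
        self._deref(fp)
```

The duplication-aware replacement the talk describes goes further, choosing victims with fingerprint reference counts in mind; the sketch only shows why deduplication cuts device writes and stretches flash endurance.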

Biography

Ming Zhao is an associate professor of the Arizona State University (ASU) School of Computing, Informatics, and Decision Systems Engineering (CIDSE), where he directs the research laboratory for Virtualized Infrastructures, Systems, and Applications (VISA, http://visa.lab.asu.edu). His research is in the areas of experimental computer systems, including distributed/cloud, big-data, and high-performance systems as well as operating systems and storage in general. He is also interested in the interdisciplinary studies that bridge computer systems research with other domains. His work has been funded by the National Science Foundation (NSF), Department of Homeland Security, Department of Defense, Department of Energy, and industry companies, and his research outcomes have been adopted by several production systems in industry. Dr. Zhao has received the NSF Faculty Early Career Development (CAREER) award, the Air Force Summer Faculty Fellowship, the VMware Faculty Award, and the Best Paper Award of the IEEE International Conference on Autonomic Computing. He received his bachelor’s and master’s degrees from Tsinghua University, and his PhD from University of Florida.

Feb. 10

Visual Understanding: Face Parsing and Video Object Segmentation, Sifei Liu and Yi-Hsuan Tsai, UC Merced

Abstract

In the first part of this talk, we present work on face parsing via a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. Experiments show state-of-the-art and accurate labeling results on challenging images for real-world applications. In the second part, work on video object segmentation is presented. This is a challenging problem due to fast-moving objects, deformed shapes, and cluttered backgrounds. To obtain accurate segmentation across time, we propose an efficient algorithm that considers video segmentation and optical flow estimation simultaneously. For video segmentation, we formulate a principled, multi-scale, spatio-temporal objective function that uses optical flow to propagate information between frames. For optical flow estimation, particularly at object boundaries, we compute the flow independently in the segmented regions and recompose the results. We call the process "object flow" and demonstrate the effectiveness of jointly optimizing optical flow and video segmentation using an iterative scheme.

Biography

Sifei Liu (http://www.sifeiliu.net/) is a Ph.D. candidate in Electrical Engineering and Computer Science at UC Merced, advised by Prof. Ming-Hsuan Yang. She completed her M.C.S. at the University of Science and Technology of China (USTC) under Stan Z. Li and Bin Li, and received her B.S. in control science and technology from North China Electric Power University. She received the Baidu Fellowship in 2013. In 2013 and 2014, she was an intern at the Baidu Deep Learning Institute, and in 2015 she was a visiting student at the Chinese University of Hong Kong. She was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. Her research interests include computer vision, machine learning, and computational photography.
Yi-Hsuan Tsai (https://sites.google.com/site/yihsuantsai/) received his B.S. in Electronics Engineering from National Chiao-Tung University, Hsinchu, Taiwan, and his M.S. in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor. He is currently working toward his Ph.D. at UC Merced, advised by Prof. Ming-Hsuan Yang, and is the recipient of the Graduate Dean's Dissertation Fellowship in 2016. He was invited to attend the doctoral consortium at the IEEE Conference on Computer Vision and Pattern Recognition in 2016. His research interests include computer vision, computational photography, and machine learning, with a focus on visual object recognition and image editing. He has also done research internships at Qualcomm Research, the Max Planck Institute, and Adobe Research.

Feb. 17

Interactive Visual Computing for Knowledge Discovery in Science, Engineering, and Training, Jian Chen, University of Maryland, Baltimore County

Abstract

Imagine that computer displays become a space to augment human thinking. Essential human activities such as seeing, gesturing, and exploring can couple with powerful computational solutions using natural interfaces and accurate visualizations. In this talk, I will present research efforts to quantify visualization techniques of all kinds. Our ongoing work includes research in: (1) perceptually accurate visualization – constructing a visualization language to study how to depict spatially complex fields in quantum-physics simulations and brain-imaging datasets; (2) using space to compensate for limited human memory – developing new computing and interactive capabilities for bat-flight motion analysis in a new metaphorical interface; and (3) extending exploratory metaphors to biological pathways to make possible integrated analysis of multifaceted datasets. During the talk, I will point to a number of other projects being carried out by my team. I will close with some thoughts on automating the evaluation of visualizations and venture that a science of visualization and metaphors now has the potential to be developed in full, and that its success will be crucial in understanding data-to-knowledge techniques in traditional desktop and immersive settings.

Biography

Jian Chen is an Assistant Professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC), where she leads the Interactive Visual Computing Lab (http://ivcl.umbc.edu) and UMBC's Immersive Hybrid Reality Lab (http://tinyurl.com/ztnvdmf). She maintains general research interests in the design and evaluation of visualizations (encoding of spatially complex brain imaging, integrating spatial and non-spatial data, perceptually accurate visualization, and event analysis) and interaction (exploring large biological pathways, immersive modeling, embodiment, and gesture input). She has garnered best-paper awards at international conferences, and her work is funded by NSF, NIST, and DoD. She is also a UMBC innovation fellow and a co-chair of the first international workshop on the emerging field of Immersive Analytics. Chen did her post-doctoral research at Brown University jointly with the Departments of Computer Science (with Dr. David H. Laidlaw) and Ecology and Evolutionary Biology. She received her Ph.D. in Computer Science from Virginia Tech with Dr. Doug A. Bowman. To learn more about Jian Chen and her work, please visit http://www.csee.umbc.edu/~jichen.

Feb. 24

Situated Intelligent Interactive Systems, Zhou Yu, Carnegie Mellon University

Abstract

Communication is an intricate dance, an ensemble of coordinated individual actions. Imagine a future where machines interact with us like humans, waking us up in the morning, navigating us to work, or discussing our daily schedules in a coordinated and natural manner. Current interactive systems being developed by Apple, Google, Microsoft, and Amazon attempt to reach this goal by combining a large set of single-task systems. But products like Siri, Google Now, Cortana, and Echo still follow pre-specified agendas: they cannot transition between tasks smoothly or track and adapt to different users naturally. My research draws on recent developments in speech and natural language processing, human-computer interaction, and machine learning to work towards the goal of developing situated intelligent interactive systems. These systems can coordinate with users to achieve effective and natural interactions. I have successfully applied the proposed concepts to various tasks, such as social conversation, job-interview training, and movie promotion. My team's proposal on engaging social conversation systems was selected to receive $100,000 from Amazon to compete in the Amazon Alexa Prize Challenge.

Biography

Zhou Yu is a graduating Ph.D. student at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University, working with Prof. Alan W Black and Prof. Alexander I. Rudnicky. She interned with Prof. David Suendermann-Oeft at the ETS San Francisco office on cloud-based multimodal dialog systems in the summers of 2015 and 2016. She also interned with Dan Bohus and Eric Horvitz at Microsoft Research on human-robot interaction in fall 2014. Prior to CMU, she received a B.S. in Computer Science and a B.A. in Linguistics from Zhejiang University in 2011, where she worked with Prof. Xiaofei He and Prof. Deng Cai on machine learning and computer vision, and with Prof. Yunhua Qu on machine translation.

March 3

Moving Towards Customizable Autonomous Driving, Chandrayee Basu, UC Merced

Abstract

In this talk, I will first present the results of my first research project on autonomous driving as a Ph.D. student at UC Merced. This work was conducted in collaboration with Berkeley Deep Drive (http://bdd.berkeley.edu/project/implicit-communication-through-motion). With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. From the days of ALVINN to the latest autonomous driving technologies like NVIDIA's Drive PX, researchers have used Learning from Demonstration to teach autonomous cars how to drive. Therefore, when it comes to customization of autonomous driving, a common answer is that the car should adopt the user's style. In this project, we questioned this assumption and conducted user research in a driving simulator to test our hypothesis. We found that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive. These results show that conventional Learning from Demonstration algorithms will be inadequate for personalizing autonomous driving. In the second part of the talk, I will discuss the implications of this result in greater detail and present some potential algorithms that we can use to augment user demonstrations.
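The core tension in the abstract can be made concrete with a toy sketch (not the authors' method; all names and numbers below are hypothetical): a learner that fits a driving-style parameter to demonstrations will, by construction, recover the demonstrated style, not the style the user actually prefers.

```python
# Toy illustration of Learning from Demonstration for a single
# driving-style parameter (here, average headway in meters).
# Hypothetical numbers, not data from the study.

def fit_style_from_demos(demo_headways):
    """Behavioral-cloning-style estimate: under a Gaussian noise model,
    the maximum-likelihood headway is just the sample average."""
    return sum(demo_headways) / len(demo_headways)

# A user's actual (aggressive) demonstrations vs. their stated preference.
demonstrated = [12.0, 10.5, 11.0, 9.5]   # short headways: aggressive driving
preferred_headway = 18.0                 # the more defensive style they prefer

learned = fit_style_from_demos(demonstrated)
print(learned)                      # mirrors the demonstrations (10.75 m)
print(learned < preferred_headway)  # learned style is more aggressive than preferred
```

Because the estimator can only reproduce what was demonstrated, it has no way to reach the defensive style users say they want, which is the gap the augmentation algorithms mentioned above would need to close.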

Biography

Chandrayee Basu (http://chandrayee-basu.squarespace.com) is a second-year Ph.D. student at UC Merced, advised by Prof. Mukesh Singhal (UC Merced) and Prof. Anca Dragan (UC Berkeley). She is applying human-robot interaction algorithms to integrate human interaction into the motion planning of autonomous cars. Chandrayee has multi-disciplinary research experience in design, applied machine learning, smart environments, and human-robot interaction, acquired as a graduate student at UC Berkeley and Carnegie Mellon University.

March 10

Inventing in the Research Lab vs Startups, David Merrill, Lemnos Labs Inc.
This talk is part of the EECS | CITRIS Frontiers in Technology Series - Special Room: COB2-140

Abstract

In this talk I will compare and contrast research innovation with startup innovation, based on my experiences at Stanford, the MIT Media Lab, and Bay Area startups. I'll discuss how the desired outcomes of each context encourage different kinds of risk and exploration, share takeaways from my research experiences, and describe how we structure the early ideation process at Lemnos Labs, where I am an Entrepreneur in Residence.

Biography

David Merrill is a technology executive and hardware startup founder with a background in computer science and human-computer interaction. His tactile learning system startup, Sifteo - based on his Ph.D. work at MIT - was acquired by drone-maker 3D Robotics in 2014 to become the kernel of a new consumer product group. At 3D Robotics he took various roles on the team that launched Solo: the Smart Drone in 2015, and then led R&D and IP. He is an alumnus of MIT and of Stanford's Computer Science and Symbolic Systems programs, a TED speaker, a human-computer interaction expert, and a drone builder. His work has been featured by the Discovery Channel, Popular Science, Wired, and the New York Times. Merrill is currently Entrepreneur in Residence at Lemnos Labs, an early-stage VC firm in San Francisco, where he is working on his next project.

March 17

Data-Based Full-Body Motion Coordination and Planning, Alain Juarez-Perez, UC Merced

Abstract

In this talk I will present new approaches for achieving full-body motion coordination for humanoid virtual characters. I will first present a parametric data-based mobility controller with known coverage and validity characteristics, achieving flexible real-time deformations for locomotion control. I will then present a method for switching between different types of locomotion in order to navigate cluttered environments. The proposed method incorporates the locomotion capabilities of the character in the path planning stage, producing paths that address the trade-off between path length and locomotion behavior choice when handling narrow passages. In the last part, I will introduce a new approach for coordinating locomotion with manipulation. The approach is based on a coordination model built from motion capture data in order to represent proximity relationships between the action and the environment. The result is a real-time controller that can autonomously produce environment-dependent full-body motion strategies. The obtained coordination model is successfully applied on top of generic walking controllers, achieving controllable characters that are able to perform complete full-body interactions with the environment.

Biography

Alain Juarez-Perez is a Ph.D. candidate in the Electrical Engineering and Computer Science graduate group at the University of California, Merced. His work is being developed at the Computer Graphics Lab under the supervision of Prof. Marcelo Kallmann and has been supported by a UC MEXUS Doctoral Fellowship. He received his B.S. in Computer Science in 2012 from the University of Guanajuato, and in 2014 he was a visiting research assistant at the USC Institute for Creative Technologies. His research interests include computer animation, data-driven algorithms, motion capture, computational geometry, machine learning, computer graphics, and motion planning.

March 24

Securing Internet of Things, Chen Qian, UC Santa Cruz

Abstract

In this talk, I will introduce my recent research projects on Internet of Things (IoT) security. First, I will introduce a physical layer authentication method for RFID tags. Second, I will talk about a fast and reliable protocol for authentication and key agreement among multiple IoT devices based on wireless signal information. Third, I will introduce an IoT data communication framework that guarantees data authenticity and integrity.

Biography

Chen Qian is an Assistant Professor in the Department of Computer Engineering at the University of California, Santa Cruz. He was on the Computer Science faculty at the University of Kentucky from 2013 to 2016. He received his Ph.D. from The University of Texas at Austin, where he worked with Simon Lam. His research interests include computer networking and distributed systems, the Internet of Things, network security, and cloud computing. He has published more than 60 papers, most of which appeared in top journals and conferences including ToN, TPDS, ICNP, INFOCOM, ICDCS, SIGMETRICS, CoNEXT, CCS, NDSS, and Ubicomp.

April 7

Stochastic distribution control and its applications, Hong Wang, PNNL

Abstract

This seminar presents a brief, selective survey of advances in stochastic distribution control, where the purpose of controller design is to control the shape of the output probability density functions (pdfs) of non-Gaussian, general stochastic systems. This research was motivated by the need for distribution-shape control in a number of practical systems. In recent years much research has been performed internationally, and journal special issues and invited sessions at major conferences have appeared since 2001. This seminar is expected to provide up-to-date information on this new area.
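To give a flavor of pdf shaping (a minimal static sketch only: the actual theory uses B-spline pdf models of dynamic stochastic systems, and every number and name below is hypothetical), suppose the output pdf is a weighted sum of fixed basis functions whose weights depend linearly on a scalar control input; the control is then chosen so the weights best match a target shape:

```python
# Minimal static sketch of pdf shaping: the output pdf's basis weights
# depend linearly on a scalar control u, w_i(u) = w0_i + b_i*u, and we
# choose u to minimize the squared distance to target weights t_i.
# All quantities are hypothetical.

def best_control(w0, b, target):
    """Closed-form least-squares optimum of sum_i (w0_i + b_i*u - t_i)^2."""
    num = sum(bi * (ti - w0i) for w0i, bi, ti in zip(w0, b, target))
    den = sum(bi * bi for bi in b)
    return num / den

w0 = [0.5, 0.3, 0.2]       # current basis weights of the output pdf (sum to 1)
b = [-0.2, 0.1, 0.1]       # sensitivity of each weight to u (sums to 0, so the
                           # shaped weights still sum to 1, i.e. a valid pdf)
target = [0.2, 0.4, 0.4]   # desired pdf shape

u = best_control(w0, b, target)
shaped = [w0i + bi * u for w0i, bi in zip(w0, b)]
print(u, shaped)
```

The closed form exists here only because the toy map from control to weights is linear and static; for a dynamic system the same idea becomes a tracking problem on the weight trajectory.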

Biography

Dr. Hong Wang joined PNNL in February 2016 as a Laboratory Fellow. He is based in the Controls team within the Electricity Infrastructure and Buildings Division of the Energy and Environment Directorate. Prior to joining PNNL, he was a full (chair) professor in process control at the University of Manchester in the U.K. Dr. Wang's research interests are in advanced modelling and control of complex industrial processes, and in fault diagnosis and fault-tolerant control. He originated the research on stochastic distribution control, where the main purpose of control input design is to make the shape of the output probability density functions follow a target function. This area alone has found a wide spectrum of potential applications in modelling, data mining, signal processing, optimization, and distributed control systems design. Dr. Wang is the lead author of five books and has published over 300 papers in international journals and conferences. He is a member of three International Federation of Automatic Control technical committees and an associate editor of IEEE Transactions on Control Systems Technology, IEEE Transactions on Automation Science and Engineering, and seven other international journals. He has been an associate editor of IEEE Transactions on Automatic Control, and has served as an IPC member and conference chairman for many international conferences. Dr. Wang has received several best-paper awards at international conferences, including the best paper award at Int. Conf. Control 2006, the Jasbar Memorial Prize for his outstanding contribution to science and technology development for the paper industry in 2006, the best theory paper award at the World Congress on Intelligent Control and Automation in 2014, and selection as one of the five finalists for the best application paper prize at the 2014 IFAC World Congress. Dr. Wang holds a Ph.D. degree from Huazhong University of Science and Technology (HUST) in P.R. China.

April 14

No seminar.

April 21

Bridging the Gap in Grasp Quality Evaluation, Shuo Liu, UC Merced

Abstract

Robot grasp planning has been extensively studied over the last decades and often consists of two different stages: deciding where to grasp an object and measuring the quality of a tentative grasp. Because these two processes are computationally demanding, form-closure grasps are more widely used in practice than force-closure grasps, even though the latter are in many cases preferable. In this talk, we introduce our framework for improving grasp quality evaluation. We accelerate the computation of the grasp wrench space, used to measure grasp quality, by exploiting geometric insights in the computation of the convex hull. In particular, we identify a cutoff sequence that terminates the convex hull calculation with guaranteed convergence to the quality measure. Furthermore, we study how noise at each joint of the manipulator affects grasp quality. Different arm configurations generate different noise distributions at the end-effector, which have a large impact on the robustness of grasping. In the last part of the talk, I will introduce a grasp planner that takes into account the local geometry of the object to be grasped. In particular, for concave objects we exploit the fact that grasping at a concave region can make the grasp more robust. These insights are studied in theory and validated on an experimental platform.
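A standard quality measure in this setting is the largest-ball (epsilon) metric: the radius of the biggest wrench-space ball, centered at the origin, that fits inside the convex hull of the contact wrenches. The sketch below illustrates it in 2-D (a toy stand-in for the 6-D wrench space, using a plain convex hull rather than the authors' accelerated cutoff computation):

```python
import math

def convex_hull(points):
    """Andrew's monotone-chain convex hull, in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def epsilon_quality(wrenches):
    """Epsilon metric: radius of the largest origin-centered ball inside
    the hull of the wrench points, i.e. the minimum distance from the
    origin to a hull edge. Assumes the origin lies strictly inside the
    hull (a force-closure grasp)."""
    hull = convex_hull(wrenches)
    eps = float('inf')
    for (ax, ay), (bx, by) in zip(hull, hull[1:] + hull[:1]):
        # distance from the origin to the supporting line of edge a-b
        d = abs(ax*by - ay*bx) / math.hypot(bx-ax, by-ay)
        eps = min(eps, d)
    return eps

# A toy "grasp" whose unit contact wrenches form a square around the origin:
wrenches = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
print(epsilon_quality(wrenches))  # nearest hull edge is at distance 1.0
```

Since the metric is the distance from the origin to the nearest hull facet, a hull computation can in principle stop as soon as the remaining facets are provably farther than the current minimum; that is the flavor of the cutoff idea described in the abstract, though the details here are only illustrative.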

Biography

Shuo Liu received his B.Sc. degree in computer engineering from the University of Minnesota in 2012 and his B.Sc. degree in mathematics from Beijing Jiaotong University in 2012. In his junior year of college, he participated in the 2011 RoboCup and won the championship in the Middle Size League. He also participated in the IROS 2016 Grasping and Manipulation Challenge and won 2nd place in the automation track. Since August 2012 he has been pursuing a Ph.D. degree in electrical engineering and computer science at the University of California, Merced, working with Dr. Stefano Carpin. His interests include manipulation, grasping, and computational geometry.

April 28

Multicopter dynamics and control: surviving the complete loss of multiple actuators and rapidly generating trajectories, Mark Mueller, Mechanical Engineering Dept., UC Berkeley

Abstract

Flying robots, such as multicopters, are increasingly becoming part of our everyday lives, with current and future applications including personal transportation, delivery services, entertainment, and aerial sensing. These systems are expected to be safe and to have a high degree of autonomy. This talk will discuss the dynamics and control of multicopters, including research results on trajectory generation and fail-safe algorithms. Finally, we will present the application of a fail-safe algorithm to a fleet of drones performing as part of a live theatre performance on New York's Broadway.

Biography

Mark W. Mueller joined the Mechanical Engineering department at UC Berkeley in September 2016. He completed his Ph.D. studies, advised by Prof. Raffaello D'Andrea, at the Institute for Dynamic Systems and Control at ETH Zurich at the end of 2015. He received a bachelor's degree from the University of Pretoria and a master's degree from ETH Zurich in 2011, both in Mechanical Engineering. http://www.me.berkeley.edu/people/faculty/mark-mueller

May 5

Robots for the Real World, James Gosling, Liquid Robotics
This talk is part of the EECS | CITRIS Frontiers in Technology Series - Special Room: COB2-140