John Glossner, Ph.D.
Abstract: Heterogeneous processing represents the future of computing, promising to unlock the performance and power efficiency of the parallel computing engines found in most modern electronic devices. This talk will detail the HSA computing platform infrastructure, including its features and advantages across computing platforms from mobile devices and tablets to desktops, HPC, and servers. The talk will focus on technical issues in mapping DSPs to HSA systems, using GPT's new DSP processor as a representative example. The presentation will also discuss important new developments that are bringing the industry closer to broad adoption of heterogeneous computing.
Biography: Dr. John Glossner is President of the Heterogeneous System Architecture (HSA) Foundation and CEO of Optimum Semiconductor Technologies (OST) and its processor division General Processor Technologies (GPT-US). Previously he served as Chair of the Board of the Wireless Innovation Forum. In 2010 he joined Wuxi DSP (a licensee of Sandbridge technology and parent company of OST) and was named to the China 1000 Talents program. He previously co-founded Sandbridge Technologies and received a World Economic Forum award. Prior to Sandbridge, John managed both technical and business activities at IBM and Lucent/Starcore. John received a Ph.D. in Electrical Engineering from TU Delft in the Netherlands, M.S. degrees in Electrical Engineering and Engineering Management from NTU, and a B.S.E.E. degree from Penn State. He has more than 40 patents and 120 publications.
Vince D. Calhoun, Ph.D., The Mind Research Network & The University of New Mexico
Overview: Brain imaging technology provides a way to sample various aspects of the brain, albeit incompletely, yielding a rich set of features crossing rest and task conditions and an ever-growing number of imaging modalities. The conditions being studied with brain imaging data are often extremely complex, and it is becoming more common for researchers to employ multiple measures (e.g. structural connectivity, task-related brain activity, functional connectivity, dynamic connectivity) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features, modalities or tasks. In this talk I will present two complementary approaches to this problem. The first involves joint fusion of multimodal imaging data and cognitive information using an approach that combines several matrix factorization methods and incorporates higher-order statistics. Such an approach enables extraction of features that are jointly optimized for multi-modal data sets. The second approach involves the development of an intuitive framework based on Markov-style flows for understanding information exchange between features in what we are calling a feature meta-space: that is, a space consisting of an arbitrary number of individual feature spaces, where the features can have any dimension and can be drawn from any data source or modality. This approach enables us to identify relationships between disparate features of varying dimensionality. For both approaches, we show simulations as well as an application to real data from a large schizophrenia brain imaging data set including resting-state fMRI, diffusion-weighted imaging, and structural MRI, as well as non-imaging measures including symptoms and cognitive scores. In sum, significant challenges still lie ahead in maximizing the information we can obtain from multiple high-dimensional data sets.
The approaches we present provide a powerful way both to extract relevant features and to summarize the extracted information in a form that can be interpreted and used to learn more about the brain and to make decisions from the obtained information.
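As a rough illustration of the shared-factorization idea behind joint fusion, the sketch below concatenates two simulated modalities along the feature axis and factorizes them with a single truncated SVD, so one set of subject-level loadings must explain both. (The SVD here is only a stand-in for the combined matrix-factorization/higher-order-statistics machinery of the actual work; all sizes and variable names are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, d1, d2, k = 50, 200, 120, 3

# Simulated ground truth: shared subject loadings drive both modalities.
shared = rng.standard_normal((n_subjects, k))
X1 = shared @ rng.standard_normal((k, d1)) + 0.1 * rng.standard_normal((n_subjects, d1))
X2 = shared @ rng.standard_normal((k, d2)) + 0.1 * rng.standard_normal((n_subjects, d2))

# Joint fusion: concatenate modalities along the feature axis so a single
# factorization must explain both with one set of subject loadings.
X = np.hstack([X1, X2])
U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
loadings = U[:, :k] * s[:k]               # shared subject-level mixing
maps1, maps2 = Vt[:k, :d1], Vt[:k, d1:]   # per-modality spatial maps
```

The per-modality maps are linked through the shared loadings, which is what lets a difference between groups (e.g. patients vs. controls) show up jointly across modalities rather than in each one separately.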
Dennis Prather, Dept. of Electrical and Computer Engineering, University of Delaware
By offering enhanced frequency re-use, "small cells" have been proposed to address the need for increased capacity in future wireless networks. However, while a denser cellular topology is indeed attractive, deploying small cells is challenging due to the need for additional real estate, permits, and access to back-haul. As an alternative, dense sectoring is being proposed, wherein RF beam forming is used to "sectorize" the Tx/Rx capabilities of a base station into smaller angular regions, which also allows for enhanced frequency re-use. However, dense sectoring is challenged by co-channel and adjacent-channel interference (CCI and ACI), which inevitably arise due to nonlinear operations that result in signal intermixing and intermodulation as the Rx aperture receives all sectors simultaneously. In short, present beam-forming Rx arrays do not adequately discriminate between the multitude of spatial and spectral signals that are simultaneously received at a base station.
To address these challenges, we present a Tx/Rx array that first "images" the spatial and spectral signals and subsequently "detects" them, thereby eliminating intermixing and intermodulation and allowing for full spatial/spectral discrimination, and hence full frequency re-use, in each sector. By analogy, visible imaging systems inherently perform such spatial/spectral discrimination by first performing a spatial mapping of the scene with a lens and then performing a spectral analysis of the signal to determine color. As an example, consider a Christmas tree with multi-colored lights. From a signal detection perspective, each light is first spatially mapped, or imaged, onto the retina, which effectively renders a spatially orthogonal signal plane, i.e., each point of origin in the source plane is focused to a separate and distinct point in the image plane that does not overlap with any adjacent points. Subsequently, the inherently nonlinear process of detection is performed to determine the spectral nature of the imaged point. However, because each point is spatially separated from every other point, i.e., non-overlapping, signal intermixing does not take place, owing to the orthogonal nature of the imaging process and the inherent isolation it provides.
In the context of wireless networks, the various colors represent frequency re-use and the various spatial locations correspond to the sectors within a wireless cell. In this talk, we present such an imaging/receiver system that operates in the wireless spectrum. In so doing, it provides inherent CCI and ACI suppression over many spatial sectors and thereby enables ultra dense frequency reuse. Moreover, this approach allows for massive MIMO capability as all spatial sectors are imaged simultaneously with latencies limited only by the propagation delay of the wireless signals themselves. This talk will also present an efficient and high capacity Tx system that complements the "imaging-receiver" in terms of spatial/spectral signal exploitation.
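The "image first, detect second" principle can be sketched with a toy uniform linear array, where a DFT across the aperture plays the role of the lens: each direction of arrival lands in its own spatial-frequency bin before any nonlinear detection takes place. (Array size, angles, and signals below are illustrative, not taken from the actual system.)

```python
import numpy as np

n_elem = 16                                  # half-wavelength-spaced linear array
angles = np.deg2rad([-30.0, 0.0, 30.0])      # three sector directions
rng = np.random.default_rng(1)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=3)  # one QPSK symbol per sector

# Array snapshot: superposition of plane waves, one from each sector.
n = np.arange(n_elem)
steering = np.exp(1j * np.pi * np.outer(n, np.sin(angles)))  # (n_elem, 3)
snapshot = steering @ symbols

# "Imaging": a DFT across the aperture acts like a lens, mapping each
# direction of arrival to a distinct spatial-frequency bin, so nonlinear
# detection can then run per bin without inter-sector mixing.
beams = np.fft.fft(snapshot) / n_elem
bins = np.argsort(np.abs(beams))[-3:]        # the bins holding the three sectors
```

Because the chosen angles fall exactly on DFT beams, each sector's symbol appears cleanly in its own bin; in the wireless analogy, each bin can then re-use the full spectrum, which is the CCI/ACI suppression the talk describes.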
Biography: Dennis Prather began his professional career by joining the US Navy in 1982, where he still serves in the reserves as a CAPT (O-6) Engineering Duty Officer. After active duty, he received the BSEE, MSEE, and PhD degrees from the University of Maryland in 1989, 1993, and 1997, respectively. During this time he worked as a senior research engineer for the Army Research Laboratory, where he performed research on both optical devices and architectures for information processing. His efforts included work on the modeling, design, and fabrication of meso-scale optical elements and their integration with active opto-electronic devices, such as semiconductor lasers and focal plane arrays. In 1997 he joined the Department of Electrical and Computer Engineering at the University of Delaware. Currently he is the College of Engineering Distinguished Professor, and his research focuses on both the theoretical and experimental aspects of RF-photonic elements and their integration into various systems for imaging, communications, and radar. To achieve this, his lab develops the computational electromagnetic models and fabrication/integration processes necessary for the demonstration of state-of-the-art RF-photonic devices such as ultra-high-bandwidth modulators, silicon photonic RF sources, photonic crystal chip-scale routers, meta-material antennas, and integrated RF-photonic phased arrays.
Professor Prather is an Endowed Professor of Electrical Engineering, a Senior Member of the IEEE, a Fellow of the Society of Photo-Instrumentation Engineers (SPIE), and a Fellow of the Optical Society of America (OSA). He has authored or co-authored over 500 scientific papers, holds over 20 patents, and has written 14 books/book chapters.
Martin Dietz
Overview: The new codec for Enhanced Voice Services (EVS), standardized by the 3rd Generation Partnership Project (3GPP) in September 2014, is a result of the 3GPP effort to provide a radically enhanced user experience for Voice over LTE (VoLTE) services. In the 3GPP study of use cases and requirements finalized in 2010, the objectives were set not only to significantly improve the existing voice communication quality of narrowband (NB) and wideband (WB) speech provided by the previous 3GPP codecs, but also to enhance the user experience by introducing super-wideband (SWB) speech covering up to 16 kHz of audio bandwidth. The study further required enhanced quality for generic audio content, robustness to packet losses and delay jitter, and backward interoperability with AMR-WB to streamline the new codec's deployment.
The resulting EVS codec spans the whole range of communications scenarios from very efficient low bitrate speech coding at 5.9 kb/s, up to transparent coding of generic audio content at 128 kb/s, covering the full audio bandwidth of 20 kHz. EVS offers high robustness against packet loss and it is the first communication codec providing state-of-the-art rendering of music signals. This has been achieved by building upon best speech and music coding technologies of previous standards with significant new improvements and functionalities.
The presentation will cover the architecture of the EVS codec and give an overview of its major building blocks. It will explain the key advancements in EVS, in particular in the two main building blocks of the codec – the linear-prediction based coding of speech-dominant content and the transform-domain coding of generic audio – and the seamless switching between both models. The picture will be completed by presenting other important improvements in EVS, such as the techniques that make the codec robust to packet loss.
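The switching between the two coding models can be caricatured with a toy per-frame router: frames with a strong pitch peak go to the LP-based (speech) path, all others to the transform (generic-audio) path. (This is purely illustrative; the actual EVS speech/music classifier, and the threshold used here, are far more elaborate than this pitch-strength test.)

```python
import numpy as np

def choose_mode(frame, sr=16000):
    """Toy mode router: 'lp' for pitched, speech-like frames,
    'transform' for everything else. Illustrative only."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)         # normalized autocorrelation
    lo, hi = sr // 400, sr // 60      # candidate pitch lags for 60-400 Hz
    return "lp" if ac[lo:hi].max() > 0.5 else "transform"

t = np.arange(640) / 16000
mode = choose_mode(np.sin(2 * np.pi * 120 * t))   # a pitched, voiced-like frame
```

In the real codec the decision must also be made seamless: the talk's point is precisely that switching between the LP and transform domains without audible discontinuities is one of the hard problems EVS solves.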
This presentation complements the keynote "Standardization and Performance of the New 3GPP EVS Codec", which details the standardization process as well as the performance evaluation of the EVS codec.
Biography: Martin Dietz studied Electrical Engineering in Erlangen, Germany. Having worked on audio compression algorithms already during his studies, he joined the Fraunhofer Institute for Integrated Circuits (IIS) in 1992, where he contributed to the research and development of audio compression algorithms, namely mp3, AAC and MPEG-4 Audio. In 2000 he left IIS to run Coding Technologies, the company that developed and marketed High-Efficiency AAC (HE-AAC). After the acquisition of Coding Technologies in 2007, he worked for Dolby until 2010. Since 2011 he has worked as a consultant, in which capacity he helped Fraunhofer IIS with its contribution to the development of Enhanced Voice Services (EVS), the most recent 3GPP codec for conversational applications, standardized in 2014.
Stefan Bruhn, Ph.D., Ericsson Research
Overview: The new codec for Enhanced Voice Services (EVS), standardized by the 3rd Generation Partnership Project (3GPP) in September 2014, is the result of a 3GPP effort to provide a radically enhanced user experience for Voice over LTE (VoLTE) services. The EVS codec addresses a wide range of communication scenarios comprising high-quality super-wideband (SWB) and full-band (FB) voice operation as well as high-capacity/high-quality narrowband (NB) and wideband (WB) voice operation. Optimum performance at any operating point in these scenarios, together with unique music/non-speech signal performance and high robustness in error-prone VoIP transmission frameworks, makes the codec clearly the best choice among all known communication codecs. On top of this, the EVS codec maintains backward compatibility with AMR-WB, thus avoiding interoperability problems or any hard-cut decisions against AMR-WB during the introduction of the new Enhanced Voice Services.
The presentation will provide an insider perspective on the standardization process of the new EVS codec. It will describe how the industry, with its many competing players, managed in an unprecedented effort to successfully develop and standardize this codec in an open, fair and constructive process. The presentation also enables an understanding of the performance of the codec both in relation to the performance requirements set by 3GPP for the EVS codec standardization and compared to other presently used state-of-the-art communication codecs.
This presentation complements the keynote "Technology Advancements in the new 3GPP EVS Codec", which details the architecture and the algorithmic improvements of the EVS codec.
Biography: Stefan Bruhn received his PhD in Electrical Engineering from the Technical University of Berlin, Germany, in 1995 and has been with Ericsson since then. He has been active in the field of low rate speech and audio compression since 1989. Currently he holds a position as expert in the area of media coding technologies within Ericsson Research where he is involved in matters related to media codec research, standardization and strategies. He made major contributions to the standardization of various 3GPP and ITU-T speech codecs. He is chairing the 3GPP SA4 EVS sub-working group that has standardized the new 3GPP codec for Enhanced Voice Services. He holds more than 30 granted patents and has published more than 35 conference and journal papers. He was awarded Ericsson inventor of the year 2004. His research interests cover speech and audio compression, speech and audio enhancement, and system aspects of digital communication systems.
Chang Wen Chen, State University of New York at Buffalo, USA
Overview: Streaming video content over HTTP, and consequently via TCP, has become one of the most popular techniques for consumer entertainment in the past decade. Early deployments of HTTP streaming are based on the client-server model, in which the client opens a TCP connection to a video server and progressively downloads the video content. Adaptive HTTP streaming solutions such as DASH require the clients to feed back bandwidth information to the server to modify the streaming session on demand and maximize streaming performance. In the new era of cloud mobile media, two paradigm-shifting changes have occurred: the video servers are now located in the cloud at one end, while the clients use mobile devices to access and view the video content at the other end. These two fundamental changes pose significant challenges because (1) adaptive HTTP streaming now needs to assemble video content from multiple servers in the cloud, and (2) the traditional client-server model does not work for mobile device users whose connection to the cloud-based video servers is centrally controlled by the wireless access networks. In this talk, several challenging technical issues and corresponding solutions will be illustrated. It will be demonstrated that innovative techniques can be designed to resolve these challenging issues and achieve much improved streaming performance in the new era of cloud mobile media.
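The client-side adaptation loop mentioned above can be sketched as a minimal throughput-based rate picker in the spirit of DASH clients: choose the highest representation whose bitrate fits a safety-margined estimate of recent download throughput. (The function name, bitrate ladder, and margin are illustrative assumptions, not part of the DASH standard.)

```python
def choose_representation(throughput_samples_kbps, ladder_kbps, margin=0.8):
    """Pick the highest bitrate rung that fits a safety-margined estimate
    of recent throughput (the harmonic mean de-emphasizes brief spikes)."""
    n = len(throughput_samples_kbps)
    harmonic_mean = n / sum(1.0 / t for t in throughput_samples_kbps)
    budget = margin * harmonic_mean
    feasible = [r for r in sorted(ladder_kbps) if r <= budget]
    return feasible[-1] if feasible else min(ladder_kbps)

# e.g. recent segment throughputs of 4000/5000/3000 kb/s against a typical ladder
choice = choose_representation([4000, 5000, 3000], [300, 750, 1500, 3000, 6000])  # → 3000
```

In the cloud-mobile-media setting the talk addresses, exactly this kind of loop breaks down: the throughput samples no longer describe a single server path, and the wireless access network, not the client, controls the bottleneck.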
Biography: Chang Wen Chen is an Empire Innovation Professor of Computer Science and Engineering at the State University of New York at Buffalo. He was the Allen Henry Endowed Chair Professor at the Florida Institute of Technology from 2003 to 2007. He was on the faculty of Electrical and Computer Engineering at the University of Rochester from 1992 to 1996 and on the faculty of Electrical and Computer Engineering at the University of Missouri-Columbia from 1996 to 2003.
He has been the Editor-in-Chief of IEEE Trans. Multimedia since January 2014. He also served as the Editor-in-Chief of IEEE Trans. Circuits and Systems for Video Technology from 2006 to 2009. He has been an Editor for several major IEEE transactions and journals, including the Proceedings of the IEEE, the IEEE Journal on Selected Areas in Communications, and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems. He has served as Conference Chair for several major IEEE, ACM and SPIE conferences related to multimedia, video communications and signal processing.
He received his BS from University of Science and Technology of China in 1983, MSEE from University of Southern California in 1986, and Ph.D. from University of Illinois at Urbana-Champaign in 1992. He and his students have received eight (8) Best Paper Awards or Best Student Paper Awards over the past two decades. He has also received several research and professional achievement awards, including the Sigma Xi Excellence in Graduate Research Mentoring Award in 2003, Alexander von Humboldt Research Award in 2009, and the State University of New York at Buffalo Exceptional Scholar – Sustained Achievement Award in 2012. He is an IEEE Fellow and an SPIE Fellow.
Michèle Wigger, Ph.D., Telecom ParisTech, Paris, France
Abstract: Pre-storing data in caches (memories) close to the end users during periods of low network congestion or good connectivity is one of the most promising ways to increase rates and to decrease latency and energy consumption in future communication and compression systems. However, before such systems can be put in place, important questions have to be addressed: How much and which information should be pre-stored in the caches? How should this information be pre-stored? How should one communicate or compress in the presence of pre-stored data? What are the benefits in rates, latency, and energy efficiency that can be attained through caching?
In this talk we will present recent information-theoretic results addressing these questions. Specifically, we will explain and analyze simple algorithms, and provide intuition on how to derive information-theoretic upper bounds on the optimal performance. We shall also present the new concept of joint cache-channel coding (such as our recent piggyback coding) as a way to improve communication in cache-aided networks, and we will illustrate the roles of the Wyner and Gács-Körner common informations in cache-aided compression systems. A particular focus of the talk will be on the energy savings offered by caching.
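The flavor of such simple caching algorithms can be sketched with the classic two-user, two-file coded-delivery example in the style of Maddah-Ali and Niesen's coded caching, where a single XOR-ed broadcast serves both users at once. (A minimal illustration; file contents and variable names are placeholders.)

```python
# Each of 2 users caches half of every file; one coded packet then
# satisfies both requests simultaneously.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

A, B = b"AAAAAAAA", b"BBBBBBBB"
A1, A2 = A[:4], A[4:]
B1, B2 = B[:4], B[4:]

cache_u1 = {"A1": A1, "B1": B1}   # placement: user 1 stores the first halves
cache_u2 = {"A2": A2, "B2": B2}   # placement: user 2 stores the second halves

# Delivery phase: user 1 requests file A, user 2 requests file B.
broadcast = xor(A2, B1)           # one coded packet serves both requests

got_A = cache_u1["A1"] + xor(broadcast, cache_u1["B1"])   # user 1 recovers A
got_B = xor(broadcast, cache_u2["A2"]) + cache_u2["B2"]   # user 2 recovers B
```

Uncoded delivery would have to send one missing half per user (two transmissions); the coded scheme sends one, which is the kind of rate gain the information-theoretic bounds in the talk quantify.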
The talk is based on joint work with Bernhard Geiger and Roy Timo from TU Munich, and with Shirin Saeedi Bidokhti from Stanford University.
Biography: Michèle Wigger (S'05–M'09–SM'14) received the M.Sc. degree in electrical engineering (with distinction) and the Ph.D. degree in electrical engineering from ETH Zurich in 2003 and 2008, respectively. In 2009, she was a Postdoctoral Researcher at the ITA Center at the University of California, San Diego, USA. Since December 2009, she has been first an Assistant Professor and now an Associate Professor at Telecom ParisTech, in Paris, France. She has been an Associate Editor of the IEEE Communications Letters since December 2012. Her main research interests are in multi-terminal information theory, in particular distributed source coding and the capacities of networks with states, feedback, user cooperation, or caching.