Invited Speakers

Organization: Texas Instruments
Keynote
Session 1: Communications and Information Theory
January 27, 2011 - 9:00am

Abstract:

Over the past several years, governments around the world have recognized the importance of reliable power grids, the need to conserve energy, and the dangers posed by excessive greenhouse gases. As blackouts and brownouts around the world have shown, utility capacity in certain locations is inadequate to meet customers' peak demands. In addition, utility customers would like the ability to monitor and more effectively manage their energy usage. Exacerbating peak power needs is the emerging electric vehicle, which will place even higher demands on utilities over the next 10 years. Communications is a basic underlying technology necessary to facilitate the management of these complex scenarios. Communicating information from the electric vehicle, as well as from the electric meter, to the utility during the battery charging process is necessary to manage energy usage during peak times; moreover, EV owners will be offered favorable kWh pricing as long as the vehicle can authenticate itself over the network. This talk will focus mainly on international technology initiatives in power line communications for the Smart Grid, and on the standardization efforts in progress, including IEEE P1901.2 and ITU-T G.hnem/G.9955.

_______________________________________________________________________________________________________________

Organization: Stanford University
Keynote
Session 2: Decision and Control
January 27, 2011 - 1:00pm

Abstract:

We present some new approaches to synthesis of optimal decentralized control systems. The focus is on stochastic systems, and we address both linear dynamical systems and more general Markov decision processes. We give explicit formulae for optimal controllers for some specific decentralized information structures. These results expose the structure of such controllers, and in particular show the role of estimation and the required state-dimension of optimal controllers. We further discuss the relationship of our approach to well-known methodologies for the centralized case, including spectral factorization and dynamic programming.
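
As context for the problem class above, here is a minimal sketch of a stochastic linear-quadratic formulation with an information constraint; the notation is mine, not taken from the talk:

\begin{aligned}
& x_{t+1} = A x_t + B u_t + w_t, \qquad y_t = C x_t + v_t,\\
& \min_{\gamma}\ \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E} \sum_{t=0}^{T-1} \left( x_t^\top Q x_t + u_t^\top R u_t \right)\\
& \text{subject to}\quad u_t^i = \gamma_t^i\!\left(\text{measurements available to controller } i \text{ up to time } t\right).
\end{aligned}

The last constraint, e.g. a sparsity or delay pattern specifying which controllers see which measurements, is what separates the decentralized problem from its centralized counterpart; without it, the classical LQG solution applies.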

_______________________________________________________________________________________________________________

A Signal-Processing Approach to Modeling Vision, and Applications

Organization: Cornell University
Keynote
Session 3: Signal Processing
January 28, 2011 - 9:00am

Abstract:

Current state-of-the-art algorithms that process visual information for end use by humans treat images and video as traditional signals and employ sophisticated signal processing strategies to achieve their excellent performance. These algorithms also incorporate characteristics of the human visual system (HVS), but typically in a relatively simplistic manner, and achievable performance is reaching an asymptote. However, large gains are still realizable with current techniques by aggressively incorporating HVS characteristics to a much greater extent than is presently done, combined with a good dose of clever signal processing. Achieving these gains requires HVS characterizations which better model natural image perception ranging from sub-threshold perception (where distortions are not visible) to suprathreshold perception (where distortions are clearly visible). In this talk, I will review results from our lab characterizing the responses of the HVS to natural images, and contrast these results with 'classical' psychophysical results. I will also present several examples of signal processing algorithms which have been designed to fully exploit these results.
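
To make "incorporating HVS characteristics" concrete, here is a minimal illustrative sketch in Python (my illustration, not an algorithm from the talk): a toy distortion measure that weights frequency-domain error by an assumed contrast sensitivity function (CSF). The CSF shape, its peak frequency, and the cycles-per-degree mapping are all placeholder assumptions.

import numpy as np

def csf(f, peak=4.0):
    # Toy band-pass contrast sensitivity curve over spatial frequency f
    # (cycles/degree); the exact form and the peak location are assumptions.
    f = np.maximum(f, 1e-6)
    return (f / peak) * np.exp(1.0 - f / peak)

def hvs_weighted_mse(ref, dist, max_cpd=32.0):
    # Weight squared spectral error by CSF sensitivity, so errors the eye
    # is most sensitive to count more than those it barely sees.
    err = np.fft.fft2(ref.astype(float) - dist.astype(float))
    fy, fx = (np.fft.fftfreq(n) for n in ref.shape)
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij")) * 2.0 * max_cpd
    # Dividing by err.size makes the w == 1 case reduce to plain MSE
    # (Parseval's relation).
    return np.mean(csf(f) * np.abs(err) ** 2) / err.size

Under this toy definition, distortion concentrated at frequencies where the CSF is small contributes little, loosely mirroring sub-threshold perception; a realistic model would also need masking and suprathreshold effects, which is where the measurements discussed in the talk come in.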

_______________________________________________________________________________________________________________

Organization: University of Wisconsin-Madison
Keynote
Session 4A: Systems and Hardware Design
January 28, 2011 - 1:00pm

Abstract:

Parallel computers, once limited to a very small number of machines in dedicated server rooms, are now becoming ubiquitous; within a couple of years many cell phones will also have multicore processors. Both the research community and industry are desperately looking for ways to program and make effective use of multiple processors. The prevailing wisdom, based upon several decades of experience with high-end parallel machines, is that a statically parallel program is required to achieve parallel execution. Yet the most commercially successful microprocessors extract parallelism (albeit at the instruction level) from a statically sequential program. What are the lessons from the past for the future? This talk will argue that statically parallel programs are a poor choice for future parallel computing environments, where diversity and anonymity are expected in the hardware platforms on which an application must run in parallel. We will present a novel model for parallel execution, one that dynamically executes a statically sequential program in parallel (in a dataflow fashion) on multiple processors. We will present results demonstrating its effectiveness, obtained by executing benchmarks according to the model on existing commercial machines using stock compilers and operating systems.
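
As a toy illustration of the kind of execution model described above (this sketch is mine; the talk's actual runtime and results are not shown here), a statically sequential program can be run dataflow-fashion by treating each statement as a task that fires once the values it reads are ready. Python futures, and the helper names task and lit, stand in for the hardware and runtime support:

from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def task(fn, *deps):
    # Schedule fn; it blocks only on the futures it actually reads, so
    # independent "statements" overlap even though the source is sequential.
    return pool.submit(lambda: fn(*(d.result() for d in deps)))

def lit(value):
    # Wrap a constant as an already-available value.
    return pool.submit(lambda: value)

# The "sequential" program below never names a processor: c and d are
# independent and may run concurrently; e fires when both are ready,
# exactly as the dataflow graph dictates.
a = lit(3)
b = lit(4)
c = task(lambda x: x * x, a)        # reads only a
d = task(lambda y: y + 10, b)       # reads only b
e = task(lambda x, y: x + y, c, d)  # joins the two strands
print(e.result())                   # 23
pool.shutdown()

The point of such a model is that the degree of parallelism is discovered at run time from the dependences, so the same sequential program can exploit however many processors a diverse, unknown platform happens to provide.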