2010 IEEE International Workshop on Multimedia Signal Processing

Saint-Malo, France, October 4-6, 2010

Keynote Speakers

Levent Onural

IEEE Fellow, Bilkent University

"Signal Processing Based Research Issues in 3DTV"

A typical 3DTV chain has capture, representation, compression, transmission, display interface and display stages. Each stage has its own specific nature and problems, and there are many alternative technologies for implementing each of these functional units. Signal processing tools play an important role in each such stage. The capture unit deals with difficult video data fusion problems. Post-capture signal processing needs may range from none in the simplest 3DTV operations to demanding time-varying 3D model generation in sophisticated ones. Coding and compression of 3DTV video have their own specific nature and solutions. Probably the most complicated and demanding signal processing is at the display interface stage, since 3D displays are quite different from 2D displays and, furthermore, come in many different forms. There are signal processing needs even within the camera and display units. Among all the different 3D modes, true 3D versions that target physical duplication of information-carrying light, such as holography and integral imaging, have their own rich signal processing needs. The signal processing problems associated especially with holographic 3DTV are unique and by far more demanding, and therefore have the potential to trigger a new line of sophisticated signal processing techniques and associated mathematics.

Levent Onural received his Ph.D. degree in electrical and computer engineering from the State University of New York at Buffalo in 1985; his BS and MS degrees are from METU in 1979 and 1981, respectively. He was a Fulbright scholar between 1981 and 1985. He joined the Electrical and Electronics Engineering Department of Bilkent University, Ankara, Turkey, in 1987, where he is a full professor at present. His current research interests are in the area of image and video processing, with emphasis on video coding, 3DTV, holographic 3DTV and the signal processing aspects of optical wave propagation. He was the coordinator of the European Commission-funded 3DTV Project (2004-2008). Currently, he is the co-leader of the 3D Immersive Interactive Media Cluster (formerly 3D Media Cluster), an umbrella organization formed by many European Commission-funded 3D-related projects. He is an active researcher and a board member of the European Community-funded Real 3D Project (2008-2011), which focuses on the fundamentals of end-to-end holographic 3D imaging systems. Dr. Onural received an award from TUBITAK of Turkey in 1995. He also received a Third Millennium Medal from IEEE in 2000. Dr. Onural is a fellow of IEEE. He served IEEE as the Director of IEEE Region 8 (Europe, Middle East and Africa) in 2001-2002, and as the Secretary of IEEE in 2003. He was a member of the IEEE Board of Directors (2001-2003), the IEEE Executive Committee (2003) and the IEEE Assembly (2001-2002).


Phil Chou

IEEE Fellow, Microsoft Research

"Telepresence : from Virtual to Reality"

The teleconferencing industry newsletter Wainhouse Report defines Telepresence as "a videoconferencing experience that creates the illusion that the remote participants are in the same room with you." Today Telepresence is embodied in the marketplace by solutions such as HP Halo and Cisco Telepresence: dedicated conference rooms sporting built-in furniture and life-sized high-definition video, costing hundreds of thousands of dollars per room. In the future, Telepresence systems will be more diverse, enabling connections between not only meeting rooms but also offices, hotel rooms, vehicles, and even large unstructured spaces such as conference halls and stadiums. Mixed reality as well as ubiquitous computing, including robotics, will play key roles, because these systems will not only need to immerse the participants in a common world, but will also need to empower the participants in ways that are better than being physically present. In this talk, I will take you on a tour of various component technologies as well as experiences that are being developed in Microsoft Research for the future of Telepresence. Along the way, many opportunities for advances in multimedia signal processing will become evident.

Philip A. Chou received the BSE degree from Princeton University, Princeton, NJ, in 1980, and the MS degree from the University of California, Berkeley, in 1983, both in electrical engineering and computer science, and the PhD degree in electrical engineering from Stanford University in 1988. From 1988 to 1990, he was a Member of Technical Staff at AT&T Bell Laboratories in Murray Hill, NJ. From 1990 to 1996, he was a Member of Research Staff at the Xerox Palo Alto Research Center in Palo Alto, CA. In 1997 he was manager of the compression group at VXtreme, an Internet video startup in Mountain View, CA, before it was acquired by Microsoft in 1997. From 1998 to the present, he has been a Principal Researcher with Microsoft Research in Redmond, Washington, where he currently manages the Communication and Collaboration Systems research group. Dr. Chou has served as Consulting Associate Professor at Stanford University 1994-1995, Affiliate Associate Professor at the University of Washington 1998-2009, and Adjunct Professor at the Chinese University of Hong Kong since 2006. Dr. Chou has longstanding research interests in data compression, signal processing, information theory, communications, and pattern recognition, with applications to video, images, audio, speech, and documents. He served as an Associate Editor in source coding for the IEEE Transactions on Information Theory from 1998 to 2001, as a Guest Editor for special issues in the IEEE Transactions on Image Processing, the IEEE Transactions on Multimedia (TMM), and IEEE Signal Processing Magazine in 1996, 2004, and 2011, respectively. He was a member of the IEEE Signal Processing Society (SPS) Image and Multidimensional Signal Processing technical committee (IMDSP TC), where he chaired the awards subcommittee 1998-2004. 
Currently he is chair of the SPS Multimedia Signal Processing TC, member of the ComSoc Multimedia TC, member of the IEEE SPS Fellow selection committee, and member of the TMM and ICME Steering Committees.  He was the founding technical chair for the inaugural NetCod 2005 workshop, special session and panel chair for ICASSP 2007, publicity chair for the Packet Video Workshop 2009, and technical co-chair for MMSP 2009.  He is a Fellow of the IEEE, a member of Phi Beta Kappa, Tau Beta Pi, Sigma Xi, and the IEEE Computer, Information Theory, Signal Processing, and Communications societies, and was an active member of the MPEG committee. He is the recipient, with Tom Lookabaugh, of the 1993 Signal Processing Society Paper Award; with Anshul Seghal, of the 2002 ICME Best Paper Award; with Zhourong Miao, of the 2007 IEEE Transactions on Multimedia Best Paper Award; and with Miroslav Ponec, Sudipta Sengupta, Minghua Chen, and Jin Li, of the 2009 ICME Best Paper Award. He is co-editor, with Mihaela van der Schaar, of the 2007 book from Elsevier, Multimedia over IP and Wireless Networks.


Ton Kalker

IEEE Fellow, HP Labs

"Protected Video Distribution in the Networked Age"

The way in which professional music is distributed and consumed has changed dramatically over the last 10 years. For this transitional period, the three key concepts that stand out are 'Napster', 'iPod' and Digital Rights Management (DRM). Currently, we have arrived at a stable situation where most digital audio distribution is controlled by a single retailer, and digital music is no longer encumbered by DRM. However, it is unclear whether the distribution and consumption of professional digital video will follow the path of digital music. It might very well be that the future of digital video will include a strong DRM component. Why this might be the case, what form the distribution of digital video will take, and why the inclusion of DRM might be less controversial than feared, will be the topic of this talk.

Ton Kalker is a Distinguished Technologist at Hewlett-Packard Laboratories. He made significant contributions to the field of media security, in particular digital watermarking, robust media identification and interoperability of Digital Rights Management systems. His history in this field of research started in 1996, submitting and participating in the standardization of video watermarking for DVD copy protection. His solution was accepted as the core technology for the proposed DVD copy protection standard and earned him the title of Fellow of the IEEE. His subsequent research focused on robust media identification, where he laid the foundation of the Content Identification business unit of Philips Electronics, successful in commercializing watermarking and other identification technologies. In his Philips period he co-authored 30 patents and 39 patent applications. His interests are in the field of signal and audio-visual processing, media security, biometrics, information theory and cryptography. Joining Hewlett-Packard in 2004, he focused his research on the problem of non-interoperability of DRM systems. He became one of the three lead architects of the Coral consortium, publishing a standard framework for DRM interoperability in the summer of 2007. Subsequently he served as chair of the Technical Working Group of DECE. He participates actively in the academic community, through students, publications, keynotes, lectures, membership in program committees and serving as conference chair. He is one of the co-founders of the IEEE Transactions on Information Forensics and Security. He is the former chair of the associated Technical Committee on Information Forensics and Security. He served for 6 years as visiting faculty at the University of Eindhoven. He is currently a visiting professor at the Harbin Institute of Technology.


Pier Luigi Dragotti

Electrical and Electronic Engineering Department at Imperial College, London

"On the sampling and compression of the plenoptic function"

Image-based rendering (IBR) is a promising way to produce arbitrary views of a scene using images instead of object models. In IBR, new views are rendered by interpolating available nearby images. The plenoptic function, which describes the light intensity passing through every viewpoint in every direction and at all times, is a powerful tool to study the IBR problem. In fact, image-based rendering can be seen as the problem of sampling and interpolating the plenoptic function. We therefore first briefly review some classical results on the spectral properties of the plenoptic function and then provide a closed-form expression for its bandwidth under the finite-field-of-view constraint. This naturally leads to an adaptive sampling strategy where the local geometrical complexity of the scene is used to adapt the sampling density of the plenoptic function. In this context, we also present an adaptive image-based rendering algorithm based on an adaptive extraction of depth layers, where the rendering system automatically adapts the minimum number of depth layers according to the scene observed and to the spacing of the sample cameras. Finally, we discuss the problem of compressing the multiple images acquired for image-based rendering and present competitive centralized and distributed compression algorithms. This talk is based on work done with a number of collaborators, in particular, M. Brookes (ICL), C. Gilliam (ICL), A. Gelman (ICL), V. Velisavljevic (Deutsche Telekom) and J. Berent (Google Inc.).

Pier Luigi Dragotti is currently a Senior Lecturer (Associate Professor) in the Electrical and Electronic Engineering Department at Imperial College, London. He received the Laurea Degree (summa cum laude) in Electrical Engineering from the University Federico II, Naples, Italy, in 1997; the Master degree in Communications Systems from the Swiss Federal Institute of Technology of Lausanne (EPFL), Switzerland, in 1998; and the PhD degree from EPFL, Switzerland, in April 2002 (thesis adviser Prof. M. Vetterli). In 1996, he was a visiting student at Stanford University, Stanford, CA, and, from July to October 2000, he was a summer researcher in the Mathematics of Communications Department at Bell Labs, Lucent Technologies, Murray Hill, NJ. Before joining Imperial College in November 2002, he was a senior researcher at EPFL working on distributed signal processing for sensor networks for the Swiss National Competence Center in Research on Mobile Information and Communication Systems. Dr Dragotti is the co-organizer of the following special sessions: 'Image Compression beyond Wavelets' at the Visual Communications and Image Processing conference (VCIP 2003), 'Sensing Reality and Communicating Bits' at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006), 'Signal/Image Reconstruction from Sparse Measurements' at the IEEE International Conference on Image Processing (ICIP 2006) and 'Sparsity and Sampling' at the SPIE Conference on Wavelet Applications in Signal and Image Processing, Wavelets XII. Dr Dragotti is an associate editor of the IEEE Transactions on Image Processing and a member of the IEEE Image, Video and MultiDimensional Signal Processing (IVMSP) Technical Committee.


Bernhard Grill

Audio Department, Fraunhofer Institute for Integrated Circuits IIS

"High Definition Communication - What it takes to implement it and what difference does it make?"

The audio quality of voice connections has remained virtually unchanged for more than 100 years. In most cases the audio bandwidth is still constrained to 3.5 kHz, and nobody should expect to recognize, by listening to the sound, what is going on in the background of a call. With IP connections being used more and more for voice communication, several attempts are now being made to improve the situation. Some propose to considerably increase the audio bandwidth, while others go as far as to promote communication in "CD quality", which could even include stereo or multi-channel audio to fully transmit the acoustical image of the speaker's background. What are the benefits to the user, and what does it take to implement such services, as far as the audio components are concerned? This talk will give an overview of the various systems proposed and what difference they can provide in user experience.

Bernhard Grill was born in Schwabach, Germany, in 1961. He received an M.S. (Diplom) degree in electrical engineering from the University of Erlangen-Nuremberg. During his time at the Fraunhofer Institute for Integrated Circuits IIS (1988-1995) and the University of Erlangen-Nuremberg (1995-1998) he contributed to the development of several perceptual audio coder systems. These include OCF, ASPEC, ISO/IEC MPEG-1/2 Audio Layer-3 (mp3), and MPEG-2 Advanced Audio Coding (AAC). Later work concentrated on scalable audio coding, now part of MPEG-4 Audio. In 1999 Bernhard Grill returned to Fraunhofer IIS, and two years later attained his Ph.D. in electrical engineering. He is currently the head of the Audio Department. His recent work includes MP3 Surround and MPEG Surround, the latter standardised in 2006. Further projects are Digital Rights Management, multimedia transport over IP and broadcast applications. In September 2000 he received the Fellowship Award of the Audio Engineering Society for his work on MPEG-4 Audio and scalable audio coding. In October 2000, he and two colleagues were presented with the "German Future Award" by the German President for their work on MP3.


Stéphane Donikian

Inria Rennes Bretagne Atlantique

"Interactive Digital Art, a need for authoring tools to orchestrate the multimodal interaction between spectators
and Art pieces"

Interactive poly-artistic works are a type of expression becoming increasingly common nowadays. Consequently, users, specta(c)tors, expect more and more to play an active part in these works. Such creations always require the use of a wide range of technologies (3D video and audio display, video and audio synthesis, body tracking…), and a large number of computer environments, software packages and frameworks have been created to fulfill these needs. However, despite this profusion of technical tools, several issues remain unsolved when realizing such artistic works. First, in the context of collaborative arts, existing frameworks do not provide means for conceptualizing art pieces for contributors coming from different artistic areas (composition, choreography, video, 3D graphics…). Second, establishing communications between software or hardware components is often complicated. Finally, the communication process and its language have to be redefined from scratch for each new realization. We will introduce ConceptMove, a unified paradigm for describing interactive poly-artistic works.
In the second part of this talk we will focus on Interactive Storytelling, which can be regarded as a new genre, deriving both from interactive media such as video games and from narrative media such as cinema or literature. Whatever degree of interactivity, freedom, and non-linearity might be provided, the role that the interactor is assigned to play always has to remain inside the boundaries defined by the author, which convey the essence of the work itself. This brings an extra level of complexity for writers, whose tools remain limited compared to technological evolutions.

Stéphane Donikian received his M.S. (1989), PhD (1992) and Habilitation to direct research (2004) from the University of Rennes 1. From 1994 to March 2007 he was a Research Scientist for CNRS, and he is now Research Director at INRIA. He was a member of the SIAMES project at IRISA between 1989 and 2006. In September 2006 he founded the Bunraku team, whose main scientific objective is to allow real and virtual humans to naturally interact in a shared virtual environment. His research interests include Virtual Reality, Virtual Humans, Reactive and Cognitive Behavioural Animation, Informed Virtual Environments, Scenario Authoring Tools for VR applications, Interactive Drama, and VR Middleware. He has conducted or participated in several national and European research projects with industrial and academic partners. He is the initiator and co-founder, in January 2009, of Golaem, a company spun off from Bunraku. Since November 2009, he has been on leave from INRIA to work as CTO of Golaem.