2024 7th International Conference on Signal Processing and Information Communications

MANILA, PHILIPPINES | APRIL 24-26, 2024

Keynote Speaker


Prof. Maryline Chetto, University of Nantes, France

Maryline Chetto is currently a full professor in computer engineering at Nantes Université, France, and a researcher with CNRS. She received the degree of Docteur de 3ème cycle in control engineering and the Habilitation à Diriger des Recherches in computer science from the University of Nantes, France, in 1984 and 1993, respectively. From 1984 to 1985, she was an assistant professor of computer science at the University of Rennes, while conducting research at the Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Rennes. In 1986, she returned to Nantes, and since 2002 she has been a full professor with the University of Nantes. She conducts her research in the Real Time System group of the Laboratoire des Sciences du Numérique de Nantes (LS2N, UMR CNRS n° 6004). Her research has focused on the development and formal validation of solutions for scheduling, fault tolerance, and dynamic power management in real-time embedded applications. Her current research specifically targets real-time scheduling issues in energy-neutral devices. She has published more than 170 papers in international journals and conferences. She edited the book Real-time Systems Scheduling (Elsevier, 2014), comprising Volume 1, Fundamentals, and Volume 2, Focuses. She co-authored the book Energy Autonomy of Real-Time Systems (Elsevier, 2016). She was general co-chair of the 2020 IEEE International Conference on Green Computing and Communications (GreenCom-2020). From 2011 to 2023, she was a member of the French National Council for Universities.

Title: Power Management for Autonomous Cyber Physical Systems with Real-time Considerations

Abstract: A growing number of applications (e.g. medical, automotive) involve many wireless devices that may be deployed over wide areas and in possibly inaccessible places. Such systems should be designed to function perpetually without any human intervention to charge or replace batteries, because doing so is either costly or impractical. As a consequence, energy harvesting technology has been an area of rapid development during the last decade.

Energy harvesting is a technology that captures unused ambient energy and converts it into electrical energy, which is used immediately or stored for later use to power small devices such as sensors, which, in addition to energy limitations, must cope with real-time constraints. Energy neutrality is the central requirement of autonomous real-time computing systems, which should consume no more energy than they harvest.

Unfortunately, most environmental energy sources are fluctuating and not controllable. Consequently, a stable power supply cannot be relied upon, which makes compliance with hard real-time constraints challenging. Specific power management and scheduling solutions have to be conceived in order to prevent energy starvation and guarantee real-time responsiveness. Task scheduling should take into account not only the timing parameters of the deadline-constrained tasks, such as worst-case execution times, but also energy consumption, the profile of the energy source, and the capacity of the energy storage unit. This keynote addresses the state of the art as well as our findings in real-time scheduling and processor activity management for wireless energy-harvesting devices.
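To make the energy-neutrality condition above concrete, here is a minimal, hypothetical sketch of a feasibility check for a periodic task set. It is not the speaker's algorithm; it assumes implicit deadlines (deadline equals period), a constant average harvested power, and ignores storage-capacity effects, all of which a real scheduler must handle.

```python
from dataclasses import dataclass

@dataclass
class Task:
    wcet: float    # worst-case execution time per job (s)
    period: float  # period = relative deadline (s)
    energy: float  # worst-case energy per job (J)

def is_energy_neutral(tasks, harvest_power):
    """Toy admission test combining two necessary conditions:
    (1) CPU utilization <= 1 (classic EDF bound for implicit deadlines);
    (2) long-run energy demand rate <= average harvested power
        (energy neutrality: consume no more than is harvested).
    """
    cpu_util = sum(t.wcet / t.period for t in tasks)
    energy_rate = sum(t.energy / t.period for t in tasks)  # J/s demanded
    return cpu_util <= 1.0 and energy_rate <= harvest_power
```

For example, two tasks demanding 0.1 J/s in total are accepted under a 0.2 W source but rejected under a 0.05 W source, even though their CPU utilization (0.35) is feasible in both cases.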

Invited Speakers


Prof. Wenwu Wang, University of Surrey, UK

Wenwu Wang is a Professor in Signal Processing and Machine Learning, and a Co-Director of the Machine Audition Lab within the Centre for Vision, Speech and Signal Processing, University of Surrey, UK. He is also an AI Fellow at the Surrey Institute for People-Centred Artificial Intelligence. His current research interests include signal processing, machine learning and perception, artificial intelligence, machine audition (listening), and statistical anomaly detection. He has (co-)authored over 300 papers in these areas. He has been involved as Principal or Co-Investigator in more than 30 research projects, funded by UK and EU research councils, and industry (e.g. BBC, NPL, Samsung, Tencent, Huawei, Saab, Atlas, and Kaon). He is a (co-)author or (co-)recipient of over 15 awards including the 2022 IEEE Signal Processing Society Young Author Best Paper Award, ICAUS 2021 Best Paper Award, DCASE 2020 Judge's Award, DCASE 2019 and 2020 Reproducible System Award, LVA/ICA 2018 Best Student Paper Award, FSDM 2016 Best Oral Presentation, and Dstl Challenge 2012 Best Solution Award. He is an Associate Editor (2020-2025) for IEEE/ACM Transactions on Audio, Speech, and Language Processing, an Associate Editor (2022-) for (Nature) Scientific Reports, and a Specialty Editor-in-Chief (2021-) of Frontiers in Signal Processing. He was a Senior Area Editor (2019-2023) and Associate Editor (2014-2018) for IEEE Transactions on Signal Processing. He is a Board Member (2023-2024) of the IEEE Signal Processing Society (SPS) Technical Directions Board, the elected Chair (2023-2024) of the IEEE SPS Machine Learning for Signal Processing Technical Committee, the Vice Chair (2022-2024) of the EURASIP Technical Area Committee on Acoustic, Speech and Music Signal Processing, an elected Member (2022-2024) of the IEEE SPS Signal Processing Theory and Methods Technical Committee, and an elected Member (2019-) of the International Steering Committee of Latent Variable Analysis and Signal Separation.
He was a Satellite Workshop Co-Chair for INTERSPEECH 2022, a Publication Co-Chair for IEEE ICASSP 2019, Local Arrangement Co-Chair of IEEE MLSP 2013, and Publicity Co-Chair of IEEE SSP 2009. He is a Satellite Workshop Co-Chair for IEEE ICASSP 2024.

Title: Generative AI for Text to Audio Generation

Abstract: Text-to-audio generation aims to produce an audio clip based on a text prompt, which is a language description of the audio content to be generated. This can be used as a sound synthesis tool for film making, game design, virtual reality/metaverse, digital media, and digital assistants for text understanding by the visually impaired. To achieve cross-modal text-to-audio generation, it is essential to comprehend the audio events and scenes within an audio clip, as well as interpret the textual information presented in natural language. In addition, learning the mapping and alignment of these two streams of information is crucial. Exciting developments have recently emerged in the field of automated audio-text cross-modal generation. In this talk, we will give an introduction to this field, including problem description, potential applications, datasets, open challenges, recent technical progress, and possible future research directions. We will focus on deep generative AI methods for text-to-audio generation. We will start with our earlier work on conditional audio generation, published at MLSP 2021, which was used as the baseline system in DCASE 2023. We will then move on to discuss several algorithms that we have developed recently, including AudioLDM, AudioLDM2, Re-AudioLDM, and WavJourney, which are becoming increasingly popular in the signal processing, machine learning, and audio engineering communities.


Dr. Ralph Gerard B. Sangalang, Batangas State University - The National Engineering University, Philippines

Ralph Gerard B. Sangalang received his B.S. degree in Electronics and Communications Engineering and M.S. in Electronics Engineering at Batangas State University, Philippines. He obtained the Ph.D. in Electrical Engineering and the Ph.D. in Electronics Engineering under the double-degree program at National Sun Yat-sen University, Taiwan, and Batangas State University - The National Engineering University, Philippines, in 2023 and 2024, respectively. He was awarded the Yeh Kung-Chie Memorial Scholarship Award at NSYSU in 2023. Currently, he is an Assistant Professor at Batangas State University - TNEU, where he is also the Center Head of the Electronic Systems Research Center (ESRC). He was the Program Chair of BS Electronics Engineering during 2017-2021 and Interim Program Chair of the BS Biomedical Engineering program. He is a member of Batangas State University's CenTraL, the Center for Transformative Learning. He has served as a reviewer for ISCAS, AICAS, CSSP, IJE, Kybernetika, and IJCDS. His research interests include memory design, AI circuits, digital systems, control systems, computational modeling, fractional circuits, and engineering education.

Title: A Low Power High Performance Digital Logic Accelerator used in Deep Neural Network for Underwater Object Recognition in an Underwater Autonomous Vehicle

Abstract: This talk presents the design of a digital logic accelerator (DLA) that uses output reuse and hardware padding. The DLA is used in the object detection subsystem of an underwater autonomous vehicle. A modified deep neural network based on the YOLOv3-tiny algorithm is used for object detection. The designed network can detect up to 20 different objects such as sharks, turtles, and even divers. The DLA uses a parallel architecture for the kernel, input, and output to increase performance. Also, a new inter-controller is designed to control the Direct Memory Access (DMA), and a new Reshape module is introduced to improve power efficiency. A detailed design description and measurements on silicon are presented. The chip is realized using the General Purpose TSMC 180-nm CMOS Mixed Signal RF process. The DLA demonstrated near real-time operation at 19.88 frames per second (fps), with a performance of 40.96 GOPS at a power of 196.8 mW.
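From the reported measurements (19.88 fps, 40.96 GOPS, 196.8 mW) one can derive two figures of merit commonly used to compare accelerators; the arithmetic below is a straightforward calculation from those numbers, not a result stated in the talk.

```python
# Reported measurements for the DLA
gops = 40.96        # giga-operations per second
power_w = 196.8e-3  # power in watts (196.8 mW)
fps = 19.88         # frames per second

# Energy efficiency: operations delivered per watt
efficiency_gops_per_w = gops / power_w      # ~208 GOPS/W

# Energy spent per processed frame: power x time-per-frame
energy_per_frame_mj = power_w / fps * 1e3   # ~9.9 mJ per frame
```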