Advanced AI Solutions for Securities Trading: Building Scalable and Optimized Systems for Global Financial Markets

Authors

Akinde Abdullah, Hassan Omolola, Samuel Olatunde and Oluwadare Aderibigbe, Austin Peay State University, USA

Abstract

We introduce QuidEst, a simplified computer vision-to-audio application that alerts an autonomous navigator to potential threats in open spaces. It associates real-time depth-map computations with spatial audio signals according to the proximity of obstacles. QuidEst is a C-based program that maps nine depth-map sub-regions of each video frame to spatial sound effects. The depth map is generated with the MiDaS deep neural network from a USB webcam or cellular-phone camera, and audio threads render the sonification of each sub-region as a combination of faded musical notes. The strength of QuidEst lies in its minimal, cost-effective hardware requirements, together with software components handled by existing open-source libraries.
QuidEst binaries: https://github.com/canessae/Quidest
Supplemental video: https://www.youtube.com/watch?v=fsVbh53SRio
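The abstract describes mapping nine depth-map sub-regions of each frame to spatial sounds whose character depends on obstacle proximity. The sketch below, in C like QuidEst itself, illustrates one plausible form of that mapping: averaging inverse depth over a 3x3 grid and turning each region into a loudness and stereo-pan value. The grid averaging, the inverse-depth convention, and the mapping constants are assumptions for illustration, not the authors' implementation.

/* Minimal sketch (not the authors' code): map a 3x3 grid of depth-map
 * sub-regions to per-region loudness and stereo pan, the kind of
 * proximity-to-sound association the abstract describes. */
#include <stdio.h>

#define W 96
#define H 96
#define GRID 3

/* Average inverse depth (larger = closer, as in MiDaS output) over one sub-region. */
static float region_mean(const float *depth, int gx, int gy)
{
    int x0 = gx * W / GRID, x1 = (gx + 1) * W / GRID;
    int y0 = gy * H / GRID, y1 = (gy + 1) * H / GRID;
    float sum = 0.0f;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x)
            sum += depth[y * W + x];
    return sum / (float)((x1 - x0) * (y1 - y0));
}

int main(void)
{
    static float depth[W * H];          /* would come from the MiDaS depth map */
    for (int i = 0; i < W * H; ++i)
        depth[i] = (i % W) / (float)W;  /* dummy gradient for demonstration */

    for (int gy = 0; gy < GRID; ++gy) {
        for (int gx = 0; gx < GRID; ++gx) {
            float gain = region_mean(depth, gx, gy); /* 0..1: closer obstacle -> louder note */
            float pan  = (float)(gx - 1);            /* -1 left, 0 centre, +1 right */
            printf("region (%d,%d): gain %.2f  pan %+.1f\n", gx, gy, gain, pan);
        }
    }
    return 0;
}

In a full system, the printed gain and pan values would instead drive the per-region audio threads that play the faded musical notes mentioned in the abstract.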

Keywords

Network Assistive Technologies, Front and Rear Vision, Spatial Sounds, Deep CNN