A multi-projector immersive installation exploring sound-converted video. Individual frames are algorithmically converted to sound and back, producing glitch imagery that shifts in real time with viewer presence and the specificity of the space.
My current body of work draws on several concepts I have been researching for a number of years. The work in the gallery centers on sound-converted video: individual video frames are converted to sound files and back through algorithmic code.
This conversion process engages concepts of abundance, collection, organization, reconfiguration, juxtaposition, and aesthetic elevation. Recorded video and sound are processed in real time throughout the installation, and the processed data is projected back onto the space.
The resulting imagery changes with viewer interaction and site specificity; no two presentations of the work are identical.
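To make the conversion concrete, the sketch below shows one way a frame-to-sound-and-back loop could work, assuming raw pixel bytes are reinterpreted as 8-bit audio samples, processed in the audio domain, and reshaped back into an image. The gradient test frame, echo effect, and all parameter values are illustrative assumptions, not the installation's actual pipeline.

```python
import numpy as np

HEIGHT, WIDTH = 480, 640

# Stand-in frame: a synthetic gradient (a real pipeline would pull frames
# from recorded or live video).
frame = np.fromfunction(
    lambda y, x, c: (x + y + 40 * c) % 256, (HEIGHT, WIDTH, 3)
).astype(np.uint8)

# "Frame to sound": flatten the pixel bytes into a 1-D stream and center
# them around zero, treating each byte as a signed 8-bit audio sample.
samples = frame.astype(np.int16).ravel() - 128

# Audio-domain processing: a crude delay/echo, a classic databending move.
# Any audio effect (filtering, distortion, time-stretching) could go here.
DELAY = 800  # echo offset in samples; arbitrary illustrative value
echoed = samples.copy()
echoed[DELAY:] = np.clip(samples[DELAY:] + samples[:-DELAY] // 2, -128, 127)

# "Sound back to frame": reinterpret the processed samples as pixel bytes.
# Artifacts introduced in the audio domain surface as visual glitches.
glitched = (echoed + 128).astype(np.uint8).reshape(HEIGHT, WIDTH, 3)
```

Run once per frame with the effect parameters driven by sensor input, a loop like this is one way the viewer-responsive behavior described above could be realized.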
2025–Present: Designed and deployed a GPU-powered AI hub running on sustainable solar energy for global research collaboration. Principal Investigator; applied for the NVIDIA Academic Grant Program.
2024–Present: Co-PI on GPU-accelerated learning environments for Accounting (with NOUN scholars in Nigeria and the SUNY Poly College of Business) and Fluid Mechanics (with the SUNY Poly College of Engineering).
2025–Present: Developed secure GPU-based retrieval-augmented generation (RAG) and model-serving infrastructure for academic and creative research, integrating OpenWebUI and Cloudflare pipelines (a minimal retrieval sketch follows below).
2025–Present: Associate Professor & Coordinator of the Interactive Media & Game Design program at SUNY Polytechnic Institute. Building curriculum at the intersection of creative computing, AI systems, and game design.
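The RAG entry above names the integration points but not the retrieval logic. As a generic illustration (not the project's actual code), the sketch below shows the core retrieval-and-prompt-assembly step over a toy corpus; embed() is a stand-in for a real embedding model, and every document and name in it is a hypothetical example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words term-frequency vector. A real
    # deployment would call a neural embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Assemble retrieved passages as grounding context; a served LLM
    # would then complete the resulting prompt.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "GPU acceleration shortens model training and inference times.",
    "Solar arrays can power research computing with renewable energy.",
    "Retrieval-augmented generation grounds model answers in documents.",
]
print(build_prompt("How does retrieval-augmented generation work?", corpus))
```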