An audio-reactive setup with multiple geometric shapes that randomly switch on every other beat, along with changing blend modes and other effects.
Adaptation of the previous Fluid Noise sketch into virtual reality with the HTC Vive.
A room with a morphing, noise-displaced sphere built out of boxes, plus audio-reactive walls mapped to one of the Fluid Noise layers. There are some additional effects too, but it is best experienced in VR ;)
The beginning of some experimenting in TouchDesigner with sound-reactive visuals. Compared to Processing, it is more performant because it makes it easy to run computations on the GPU instead of the CPU.
It shows a composition of three different sketches blended together for this interesting multi-layered look. Of course, everything reacts to different bands of the audio spectrum.
First ideation for a live music performance at the HTWG Konstanz.
About 10,000 "particles" rendered as circles or as filled or outlined boxes, depending on the music and the intensity of different frequency bands. Coded entirely in Processing.
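A minimal sketch of how such a band-to-shape mapping might look, written here as a Python stand-in for the Processing logic; the threshold values and function name are assumptions, not the original code:

```python
def pick_shape(band_intensity, low=0.33, high=0.66):
    """Map a frequency-band intensity in [0, 1] to a particle style.
    The thresholds are illustrative, not the values from the sketch."""
    if band_intensity < low:
        return "circle"
    if band_intensity < high:
        return "outline_box"
    return "filled_box"
```

In the real sketch, each of the ~10,000 particles would call a function like this every frame with the current intensity of its assigned band.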
An interactive kinetic typography generator with active speech recognition to change the displayed word. Depending on (in this case) the mouse position, the user can influence the tile size and the speed of the animation.
It is also possible to change variables such as the font or the colors by saying custom keywords, or to customize the entire look and feel of the animation using speech alone, without a mouse.
An AI trained to recognize laughs powers this Laugh-O-Meter: an interactive media application right in the browser, built for a comedy night to identify the funniest jokes.
The meter accesses the computer's microphone and checks the loudness 60 times per second. If laughter is recognized at that moment, a laughter-loudness score is recorded, and only the highest (i.e. loudest) laugh of the session is shown.
Proper AI training is essential so that the score only rises on laughter and not on screaming or other loud sounds.
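The per-tick scoring described above can be sketched like this; a simplified Python stand-in where the function name and sample values are illustrative, not from the actual application:

```python
def update_high_score(high_score, loudness, is_laugh):
    """One tick of the meter (run ~60 times per second): only loudness
    measured during recognized laughter can raise the session high score."""
    if is_laugh and loudness > high_score:
        return loudness
    return high_score

# A tiny simulated session: loud screams (is_laugh=False) are ignored,
# so only the loudest recognized laugh (0.7) sets the score.
samples = [(0.9, False), (0.4, True), (0.7, True), (0.95, False)]
score = 0.0
for loudness, is_laugh in samples:
    score = update_high_score(score, loudness, is_laugh)
```

This is exactly why the training matters: without a reliable laugh classifier gating the update, the screams at 0.9 and 0.95 would win.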
First attempts at generative graphics: 12 points that are connected randomly on each run.
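The random connection idea could look roughly like this; a hypothetical Python version that keeps the original count of 12 points, while everything else is an assumption:

```python
import random

def random_connections(n_points=12, n_lines=12, seed=None):
    """Scatter n_points random points in the unit square and connect
    randomly chosen distinct pairs, producing a new drawing each run."""
    rng = random.Random(seed)
    points = [(rng.random(), rng.random()) for _ in range(n_points)]
    lines = []
    for _ in range(n_lines):
        a, b = rng.sample(range(n_points), 2)  # two distinct point indices
        lines.append((points[a], points[b]))
    return lines
```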
A laughter-recognizing AI generates new position coordinates for the »k«, while the loudness affects the font size.
For visual quality and depth, a new layer is created after each recognized laugh, drawing trails that visualize the laugh's intensity.
A poem made by a human is converted by a machine into artificially spoken text, which is in turn recognized by a machine that generatively creates typographic and geometric visuals while cycling through the color spectrum.
The task was to develop an animated way of typographic poem interpretation.
The browser application I developed recognizes speech and displays the most recently spoken word. A text-to-speech converter creates the audio output, which is then instantly picked up by the speech recognition and displayed. Additionally, a grid mapped to the loudness of the audio is created, sized to fit the screen it is viewed on. On each cycle, the word's length and height are measured and fitted exactly into the screen with a simple equation.
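The fitting step might look something like this; a minimal Python sketch assuming a simple monospace model where each glyph is `char_aspect` times the font size wide (the function name and aspect value are assumptions, not the original equation):

```python
def fit_font_size(word, screen_w, screen_h, char_aspect=0.6):
    """Pick the largest font size at which the word spans the full
    screen width (monospace model: glyph width = char_aspect * size)
    without exceeding the screen height."""
    size_by_width = screen_w / (len(word) * char_aspect)
    return min(size_by_width, screen_h)
```

For example, a 5-letter word on a 1920x1080 screen would be set at 640 px, while a single letter is clamped to the 1080 px screen height.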
A simple graphical 2D tool that creates illusionary 3D-like visuals based on human input. The ellipse is slightly morphed each time so it stays imperfect, just like human motion.
The idea came from a piece of art created with some unusual tools: a stirrer and a pen.
A simple but very recognizable graphic pattern generator as the basis of a visual identity.
The interactive animation can be used online and in video advertising as an anchor for the brand.
A pro-bono project for a local animal shelter.
They needed an upgrade from their nonexistent visual identity, previously cobbled together in Paint and Word, in order to be more attractive to potential pet owners, so that people adopt rather than buy a pet.
Different like every animal in the shelter and happy like the new pet owners, the logo creator mirrors the shelter's values. Some logos are big and some tiny, some a bit weird with pointy ears, some big and round: it doesn't matter.
They're all imperfectly perfect.